Game Programming in C++ - Creating 3D Games
1. Cover Page
2. About This E-Book
3. Title Page
4. Copyright Page
5. Dedication Page
6. Contents at a Glance
7. Contents
8. Preface
9. Acknowledgments
10. About the Author
11. Chapter 1: Game Programming Overview
1. Microsoft Windows
2. Apple macOS
1. Anatomy of a Frame
2. Implementing a Skeleton Game Class
3. Main Function
4. Basic Input Processing
5. Basic 2D Graphics
7. Game Project
8. Summary
9. Additional Reading
10. Exercises
1. Exercise 1.1
2. Exercise 1.2
12. Chapter 2: Game Objects and 2D Graphics
1. Game Objects
2. Sprites
3. Scrolling Backgrounds
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 2.1
2. Exercise 2.2
3. Exercise 2.3
13. Chapter 3: Vectors and Basic Physics
1. Vectors
2. Basic Movement
3. Newtonian Physics
1. Circle-Versus-Circle Intersection
2. Creating a CircleComponent Subclass
5. Game Project
6. Summary
7. Additional Reading
8. Exercises
1. Exercise 3.1
2. Exercise 3.2
3. Exercise 3.3
14. Chapter 4: Artificial Intelligence
2. Pathfinding
1. Graphs
2. Breadth-First Search
3. Heuristics
4. Greedy Best-First Search
5. A* Search
6. Dijkstra’s Algorithm
7. Following a Path
8. Other Graph Representations
3. Game Trees
1. Minimax
2. Handling Incomplete Game Trees
3. Alpha-Beta Pruning
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 4.1
2. Exercise 4.2
15. Chapter 5: OpenGL
2. Triangle Basics
1. Why Polygons?
2. Normalized Device Coordinates
3. Vertex and Index Buffers
3. Shaders
1. Vertex Shaders
2. Fragment Shaders
3. Writing Basic Shaders
4. Loading Shaders
5. Drawing Triangles
4. Transformation Basics
1. Object Space
2. World Space
3. Transforming to World Space
1. Matrix Multiplication
2. Transforming a Point by Using a Matrix
3. Transforming to World Space, Revisited
4. Adding World Transforms to Actor
5. Transforming from World Space to Clip Space
6. Updating Shaders to Use Transform Matrices
6. Texture Mapping
7. Game Project
8. Summary
9. Additional Reading
10. Exercises
1. Exercise 5.1
2. Exercise 5.2
16. Chapter 6: 3D Graphics
2. Loading 3D Models
3. Drawing 3D Meshes
4. Lighting
5. Game Project
6. Summary
7. Additional Reading
8. Exercises
1. Exercise 6.1
2. Exercise 6.2
17. Chapter 7: Audio
1. Bootstrapping Audio
1. FMOD
2. Installing FMOD
3. Creating an Audio System
4. Banks and Events
5. The SoundEvent Class
2. 3D Positional Audio
1. Buses
2. Snapshots
3. Occlusion
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 7.1
2. Exercise 7.2
18. Chapter 8: Input Systems
1. Input Devices
1. Polling
2. Positive and Negative Edges
3. Events
4. Basic InputSystem Architecture
2. Keyboard Input
3. Mouse Input
4. Controller Input
5. Input Mappings
6. Game Project
7. Summary
8. Additional Reading
9. Exercises
1. Exercise 8.1
2. Exercise 8.2
19. Chapter 9: Cameras
1. First-Person Camera
2. Follow Camera
3. Orbit Camera
4. Spline Camera
5. Unprojection
6. Game Project
7. Summary
8. Additional Reading
9. Exercises
1. Exercise 9.1
2. Exercise 9.2
20. Chapter 10: Collision Detection
1. Geometric Types
1. Line Segments
2. Planes
3. Bounding Volumes
2. Intersection Tests
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 10.1
2. Exercise 10.2
3. Exercise 10.3
21. Chapter 11: User Interfaces
1. Font Rendering
2. UI Screens
3. HUD Elements
4. Localization
1. Exercise 11.1
2. Exercise 11.2
3. Exercise 11.3
22. Chapter 12: Skeletal Animation
3. Game Project
4. Summary
5. Additional Reading
6. Exercises
1. Exercise 12.1
2. Exercise 12.2
23. Chapter 13: Intermediate Graphics
2. Rendering to Textures
3. Deferred Shading
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 13.1
2. Exercise 13.2
24. Chapter 14: Level Files and Binary Data
3. Binary Data
4. Game Project
5. Summary
6. Additional Reading
7. Exercises
1. Exercise 14.1
2. Exercise 14.2
25. Index
About This E-Book
Sanjay Madhav
Editor-in-Chief
Mark Taub
Executive Editor
Laura Lewin
Development Editor
Michael Thurston
Managing Editor
Sandra Schroeder
Tech Editors
Josh Glazer
Brian Overland
Matt Whiting
Senior Project Editor
Lori Lyons
Production Manager
Dhayanidhi Karunanidhi
Copy Editor
Kitty Wilson
Indexer
Lisa Stumpf
Proofreader
Larry Sulky
Editorial Assistant
Courtney Martin
Cover Designer
Chuti Prasertsith
Compositor
codemantra
To my family and friends: Thanks for the support.
Contents at a Glance
Preface
Acknowledgments
About the Author
1 Game Programming Overview
2 Game Objects and 2D Graphics
3 Vectors and Basic Physics
4 Artificial Intelligence
5 OpenGL
6 3D Graphics
7 Audio
8 Input Systems
9 Cameras
10 Collision Detection
11 User Interfaces
12 Skeletal Animation
13 Intermediate Graphics
14 Level Files and Binary Data
1 Game Programming Overview
Basic 2D Graphics
The Color Buffer
Double Buffering
Game Project
Summary
Additional Reading
Exercises
Exercise 1.1
Exercise 1.2
2 Game Objects and 2D Graphics
Sprites
Loading Image Files
Drawing Sprites
Animating Sprites
Scrolling Backgrounds
Game Project
Summary
Additional Reading
Exercises
Exercise 2.1
Exercise 2.2
Exercise 2.3
3 Vectors and Basic Physics
Vectors
Getting a Vector between Two Points: Subtraction
Scaling a Vector: Scalar Multiplication
Combining Two Vectors: Addition
Determining a Distance: Length
Determining Directions: Unit Vectors and Normalization
Converting from an Angle to a Forward Vector
Converting a Forward Vector to an Angle: Arctangent
Determining the Angle between Two Vectors: Dot Product
Calculating a Normal: Cross Product
Basic Movement
Creating a Basic MoveComponent Class
Newtonian Physics
Linear Mechanics Overview
Game Project
Summary
Additional Reading
Exercises
Exercise 3.1
Exercise 3.2
Exercise 3.3
4 Artificial Intelligence
State Machine Behaviors
Designing a State Machine
Basic State Machine Implementation
States as Classes
Pathfinding
Graphs
Breadth-First Search
Heuristics
Greedy Best-First Search
A* Search
Dijkstra’s Algorithm
Following a Path
Other Graph Representations
Game Trees
Minimax
Handling Incomplete Game Trees
Alpha-Beta Pruning
Game Project
Summary
Additional Reading
Exercises
Exercise 4.1
Exercise 4.2
5 OpenGL
Initializing OpenGL
Setting Up the OpenGL Window
The OpenGL Context and Initializing GLEW
Rendering a Frame
Triangle Basics
Why Polygons?
Normalized Device Coordinates
Vertex and Index Buffers
Shaders
Vertex Shaders
Fragment Shaders
Writing Basic Shaders
Loading Shaders
Drawing Triangles
Transformation Basics
Object Space
World Space
Transforming to World Space
Texture Mapping
Loading the Texture
Updating the Vertex Format
Updating the Shaders
Alpha Blending
Game Project
Summary
Additional Reading
Exercises
Exercise 5.1
Exercise 5.2
6 3D Graphics
The Actor Transform in 3D
Transform Matrices for 3D
Euler Angles
Quaternions
New Actor Transform in Action
Loading 3D Models
Choosing a Model Format
Updating the Vertex Attributes
Loading a gpmesh File
Drawing 3D Meshes
Transforming to Clip Space, Revisited
Lighting
Revisiting Vertex Attributes
Types of Lights
Phong Reflection Model
Implementing Lighting
Game Project
Summary
Additional Reading
Exercises
Exercise 6.1
Exercise 6.2
7 Audio
Bootstrapping Audio
FMOD
Installing FMOD
3D Positional Audio
Setting Up a Basic Listener
Adding Positional Functionality to SoundEvent
Game Project
Summary
Additional Reading
Exercises
Exercise 7.1
Exercise 7.2
8 Input Systems
Input Devices
Polling
Positive and Negative Edges
Events
Keyboard Input
Mouse Input
Buttons and Position
Relative Motion
Scroll Wheel
Controller Input
Enabling a Single Controller
Buttons
Analog Sticks and Triggers
Filtering Analog Sticks in Two Dimensions
Input Mappings
Game Project
Summary
Additional Reading
Exercises
Exercise 8.1
Exercise 8.2
9 Cameras
First-Person Camera
Basic First-Person Movement
Camera (Without Pitch)
Adding Pitch
First-Person Model
Follow Camera
Basic Follow Camera
Adding a Spring
Orbit Camera
Spline Camera
Unprojection
Game Project
Summary
Additional Reading
Exercises
Exercise 9.1
Exercise 9.2
10 Collision Detection
Geometric Types
Line Segments
Planes
Bounding Volumes
Intersection Tests
Contains Point Tests
Bounding Volume Tests
Game Project
Summary
Additional Reading
Exercises
Exercise 10.1
Exercise 10.2
Exercise 10.3
11 User Interfaces
Font Rendering
UI Screens
The UI Screen Stack
Dialog Boxes
HUD Elements
Adding an Aiming Reticule
Adding Radar
Localization
Working with Unicode
Exercise 11.1
Exercise 11.2
Exercise 11.3
12 Skeletal Animation
Foundations of Skeletal Animation
Skeletons and Poses
Game Project
Summary
Additional Reading
Exercises
Exercise 12.1
Exercise 12.2
13 Intermediate Graphics
Improving Texture Quality
Texture Sampling, Revisited
Mipmapping
Anisotropic Filtering
Rendering to Textures
Creating the Texture
Creating a Framebuffer Object
Rendering to a Framebuffer Object
Deferred Shading
Creating a G-Buffer Class
Writing to the G-buffer
Global Lighting
Adding Point Lights
Game Project
Summary
Additional Reading
Exercises
Exercise 13.1
Exercise 13.2
14 Level Files and Binary Data
Loading Actors
Loading Components
Binary Data
Saving a Binary Mesh File
Loading a Binary Mesh File
Game Project
Summary
Additional Reading
Exercises
Exercise 14.1
Exercise 14.2
Index
PREFACE
If you take a step back, you can see that many game
engines and tools are, at their core, written in C++. This
means that C++ is ultimately the technology behind
every game created using one of these tools.
Chapter 14, “Level Files and Binary Data,” discusses how to load
and save level files, as well as how to write binary file formats.
void DoSomething()
{
    // Do the thing
    ThisDoesSomething();
}
note
Notes contain some useful information about implementation changes or
other features that are worth noting.
tip
Tips provide hints on how to add certain additional features to your code.
warning
Warnings call out specific pitfalls that warrant caution.
SIDEBAR
CHAPTER 1
GAME PROGRAMMING OVERVIEW
Microsoft Windows
For Windows development, the most popular IDE by
far is Microsoft Visual Studio. Visual Studio also
tends to be the most popular IDE for C++ game
developers, with most PC and console developers
gravitating toward the IDE.
warning
THERE ARE DIFFERENT VERSIONS OF VISUAL STUDIO: There are
several other products in the Microsoft Visual Studio suite, including Visual
Studio Code and Visual Studio for Mac. Neither of these products is the
same thing as Visual Studio Community 2017, so be careful to install the
correct version!
Apple macOS
On macOS, Apple provides the free Xcode IDE for
development of programs for macOS, iOS, and other
related platforms. The code for this book works in
both Xcode 8 and 9. Note that Xcode 8 requires
macOS 10.11 El Capitan or higher, while Xcode 9
requires macOS 10.12 Sierra or higher.
Anatomy of a Frame
At a high level, a game performs the following steps
on each frame:
note
This style of game loop is single-threaded, meaning it does not take
advantage of modern CPUs that can execute multiple threads
simultaneously. Making a game loop that supports multiple threads is very
complex, and not necessary for games that are smaller in scope. A good
book to learn more about multi-threaded game loops is Jason Gregory’s,
listed in the “Additional Reading” section at the end of this chapter.
void Game::RunLoop()
{
    while (!mShouldQuit)
    {
        // Process Inputs
        JoystickData j = GetJoystickData();

        // Update Game World
        UpdatePlayerPosition(j);
        for (Ghost& g : mGhosts)
        {
            if (g.Collides(player))
            {
                // Handle the player colliding with a ghost
            }
            else
            {
                g.Update();
            }
        }
        // ...

        // Generate Outputs
        RenderGraphics();
        RenderAudio();
    }
}
class Game
{
public:
    Game();
    bool Initialize();
    void RunLoop();
    void Shutdown();
private:
    void ProcessInput();
    void UpdateGame();
    void GenerateOutput();
    SDL_Window* mWindow;
    bool mIsRunning;
};
Game::Initialize
The Initialize function returns true if
initialization succeeds and false otherwise. You
need to initialize the SDL library with the SDL_Init
function. This function takes in a single parameter, a
bitwise-OR of all subsystems to initialize. For now,
you only need to initialize the video subsystem,
which you do as follows:
int sdlResult = SDL_Init(SDL_INIT_VIDEO);
if (sdlResult != 0)
{
    return false;
}
Flag            Subsystem
SDL_INIT_AUDIO  Audio device management, playback, and recording
mWindow = SDL_CreateWindow(
    "Game Programming in C++ (Chapter 1)", // Window title
    100, 100,  // Top-left x/y coordinates of the window
    1024, 768, // Width/height of the window
    0          // Flags (0 for no flags set)
);
if (!mWindow)
{
    return false;
}
Game::Shutdown
The Shutdown function does the opposite of
Initialize. It first destroys the SDL_Window with
SDL_DestroyWindow and then closes SDL with
SDL_Quit:
void Game::Shutdown()
{
    SDL_DestroyWindow(mWindow);
    SDL_Quit();
}
Game::RunLoop
The RunLoop function keeps running iterations of
the game loop until mIsRunning becomes false, at
which point the function returns. Because you have
the three helper functions for each phase of the game
loop, RunLoop simply calls these helper functions
inside the loop:
void Game::RunLoop()
{
    while (mIsRunning)
    {
        ProcessInput();
        UpdateGame();
        GenerateOutput();
    }
}
Main Function
Although the Game class is a handy encapsulation of
the game’s behavior, the entry point of any C++
program is the main function. You must implement
a main function (in Main.cpp) as shown in Listing
1.3.
int main(int argc, char** argv)
{
    Game game;
    bool success = game.Initialize();
    if (success)
    {
        game.RunLoop();
    }
    game.Shutdown();
    return 0;
}
With this code in place, you can now run the game
project. When you do, you see a blank window, as shown
in Figure 1.1 (though on macOS, this window may appear
black instead of white). Of course, there’s a problem: The
game never ends! Because no code changes the
mIsRunning member variable, the game loop never
ends, and the RunLoop function never returns.
Naturally, the next step is to fix this problem by allowing
the player to quit the game.
void Game::ProcessInput()
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
            case SDL_QUIT:
                mIsRunning = false;
                break;
        }
    }
}
Now when the game is running, clicking the X on the
window causes the while loop inside RunLoop to
terminate, which in turn shuts down the game and exits
the program. But what if you want the game to quit when
the user presses the Escape key? While you could check
for a keyboard event corresponding to this, an easier
approach is to grab the entire state of the keyboard with
SDL_GetKeyboardState, which returns a pointer to
an array that contains the current state of the keyboard:
const Uint8* state = SDL_GetKeyboardState(NULL);
if (state[SDL_SCANCODE_ESCAPE])
{
    mIsRunning = false;
}
void Game::ProcessInput()
{
    SDL_Event event;
    while (SDL_PollEvent(&event))
    {
        switch (event.type)
        {
            case SDL_QUIT:
                mIsRunning = false;
                break;
        }
    }

    const Uint8* state = SDL_GetKeyboardState(NULL);
    if (state[SDL_SCANCODE_ESCAPE])
    {
        mIsRunning = false;
    }
}
BASIC 2D GRAPHICS
Before you can implement the “generate outputs”
phase of the game loop, you need some
understanding of how 2D graphics work for games.
note
Many game programmers also use the term framebuffer to reference the
location in memory that contains the color data for a frame. However, a more
precise definition of framebuffer is that it is the combination of the color buffer
and other buffers (such as the depth buffer and stencil buffer). In the interest
of clarity, this book references the specific buffers.
Double Buffering
As mentioned earlier in this chapter, games update
several times per second (at the common rates of 30
and 60 FPS). If a game updates the color buffer at
the same rate, this gives the illusion of motion, much
the way a flipbook appears to show an object in
motion when you flip through the pages.
SDL_Renderer* mRenderer;
mRenderer = SDL_CreateRenderer(
    mWindow, // Window to create the renderer for
    -1,      // Usually -1 (let SDL decide the graphics driver)
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC
);
SDL_DestroyRenderer(mRenderer);
First, let’s worry about the first and third steps. Because
graphics are an output, it makes sense to put graphics
drawing code in Game::GenerateOutput.
To clear the back buffer, you first need to specify a color
with SDL_SetRenderDrawColor. This function takes
in a pointer to the renderer, as well as the four RGBA
components (from 0 to 255). For example, to set the
color as blue with 100% opacity, use the following:
SDL_SetRenderDrawColor(
mRenderer,
0, // R
0, // G
255, // B
255 // A
);
SDL_RenderClear(mRenderer);
SDL_RenderPresent(mRenderer);
With this code in place, if you now run the game, you’ll
see a filled-in blue window, as shown in Figure 1.5.
SDL_Rect wall{
0, // Top left x
0, // Top left y
1024, // Width
thickness // Height
};
Here, the x/y coordinates of the top-left corner are (0, 0),
meaning the rectangle will be at the top left of the screen.
You hard-code the width of the rectangle to 1024,
corresponding to the width of the window. (It’s generally
frowned upon to assume a fixed window size, as is done
here, and you’ll remove this assumption in later
chapters.) The thickness variable is a const int set to
15, which makes it easy to adjust the thickness of the
wall.
SDL_RenderFillRect(mRenderer, &wall);
The game then draws a wall in the top part of the screen.
You can use similar code to draw the bottom wall and the
right wall, only changing the parameters of the
SDL_Rect. For example, the bottom wall could have the
same rectangle as the top wall except that the top-left y
coordinate could be 768 - thickness.
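To make the layout concrete, here is a standalone sketch of the three wall rectangles. The Rect struct below is a stand-in for SDL_Rect so the snippet has no SDL dependency; the 1024×768 window size and thickness of 15 come from the chapter, while the helper names are illustrative:

```cpp
// Stand-in for SDL_Rect (top-left x/y, then width/height)
struct Rect
{
    int x;
    int y;
    int w;
    int h;
};

const int thickness = 15;

// Top wall: spans the window's width, thickness pixels tall
Rect MakeTopWall()    { return Rect{0, 0, 1024, thickness}; }
// Bottom wall: same, shifted down to y = 768 - thickness
Rect MakeBottomWall() { return Rect{0, 768 - thickness, 1024, thickness}; }
// Right wall: spans the window's height, thickness pixels wide
Rect MakeRightWall()  { return Rect{1024 - thickness, 0, thickness, 768}; }
```

Each helper returns a rectangle that (as an SDL_Rect) could be handed to SDL_RenderFillRect.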
struct Vector2
float x;
float y;
};
SDL_Rect ball{
static_cast<int>(mBallPos.x - thickness/2),
static_cast<int>(mBallPos.y - thickness/2),
thickness,
thickness
};
With all these ways real time and game time might
diverge, it’s clear that the “update game” phase of the
game loop should account for elapsed game time.
// Frame-dependent: moves 5 pixels every frame
enemy.mPosition.x += 5;
// Frame-independent: moves 150 pixels every second
enemy.mPosition.x += 150.0f * deltaTime;
Now the code will work well regardless of the frame rate.
At 30 FPS, the delta time is ~0.033, so the enemy will
move 5 pixels per frame, for a total of 150 pixels per
second. At 60 FPS, the enemy will move only 2.5 pixels
per frame but will still move a total of 150 pixels per
second. The movement certainly will be smoother in the
60 FPS case, but the overall per-second speed remains
the same.
Uint32 mTicksCount;
void Game::UpdateGame()
{
    // Delta time is the difference in ticks from the last frame
    // (converted to seconds)
    float deltaTime = (SDL_GetTicks() - mTicksCount) / 1000.0f;
    // Update the tick count (for the next frame)
    mTicksCount = SDL_GetTicks();
    // ...
}
Consider what happens the very first time you call
UpdateGame. Because mTicksCount starts at zero, you
end up with some positive value of SDL_GetTicks (the
milliseconds since initialization) and divide it by
1000.0f to get a delta time in seconds. Next, you save
the current value of SDL_GetTicks in mTicksCount.
On the next frame, the deltaTime line calculates a new
delta time based on the old value of mTicksCount and
the new value. Thus, on every frame, you compute a delta
time based on the ticks elapsed since the previous frame.
You also must watch out for a delta time that’s too high.
Most notably, this happens when stepping through game
code in the debugger. For example, if you pause at a
breakpoint in the debugger for five seconds, you’ll end up
with a huge delta time, and everything will jump far
forward in the simulation. To fix this problem, you can
clamp the delta time to a maximum value (such as
0.05f). This way, the game simulation will never jump
too far forward on any one frame. This yields the version
of Game::UpdateGame in Listing 1.5. While you aren’t
updating the position of the paddle or ball just yet, you
are at least calculating the delta time value.
void Game::UpdateGame()
{
    float deltaTime = (SDL_GetTicks() - mTicksCount) / 1000.0f;
    // Clamp the maximum delta time value
    if (deltaTime > 0.05f)
    {
        deltaTime = 0.05f;
    }
    mTicksCount = SDL_GetTicks();
    // ...
}
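For illustration, the tick bookkeeping and clamp can be isolated into a pure function that takes raw millisecond tick counts (standing in for SDL_GetTicks), making the behavior easy to check in isolation. The function name and default are illustrative; the 0.05f cap comes from the chapter:

```cpp
#include <cstdint>

// Convert a pair of millisecond tick counts into a delta time in
// seconds, clamped to a maximum step so debugger pauses don't cause
// huge jumps in the simulation
float ComputeDeltaTime(std::uint32_t nowTicks, std::uint32_t lastTicks,
                       float maxDelta = 0.05f)
{
    float deltaTime = (nowTicks - lastTicks) / 1000.0f;
    if (deltaTime > maxDelta)
    {
        deltaTime = maxDelta;
    }
    return deltaTime;
}
```

A 16-millisecond frame yields roughly 0.016 seconds, while a 5-second pause at a breakpoint is clamped to 0.05.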
mPaddleDir = 0;
if (state[SDL_SCANCODE_W])
{
    mPaddleDir -= 1;
}
if (state[SDL_SCANCODE_S])
{
    mPaddleDir += 1;
}

// Then, in UpdateGame, move the paddle based on its direction,
// a speed of 300 pixels/second, and delta time
if (mPaddleDir != 0)
{
    mPaddlePos.y += mPaddleDir * 300.0f * deltaTime;
}
Next, you need code that bounces the ball off walls. The
code for determining whether the ball collides with a wall
is like the code for checking whether the paddle is
offscreen. For example, the ball collides with the top wall
if its y position is less than or equal to the thickness of
the wall.
For the case of the top wall, this yields code like the
following:
if (mBallPos.y <= thickness && mBallVel.y < 0.0f)
{
    mBallVel.y *= -1;
}
This way, if the ball collides with the top wall but is
moving away from the wall, you do not negate the y
velocity.
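As a standalone sketch, the "only bounce while moving toward the wall" rule for the top wall looks like this (the function name is illustrative; a negative y velocity means the ball is moving up, toward the top wall):

```cpp
// Return the new y velocity: negated only if the ball overlaps the
// top wall AND is still moving toward it
float BounceOffTopWall(float ballY, float ballVelY, float wallThickness)
{
    if (ballY <= wallThickness && ballVelY < 0.0f)
    {
        return -ballVelY;
    }
    return ballVelY;
}
```

Note that a ball overlapping the wall but moving away keeps its velocity, which prevents the ball from getting stuck oscillating inside the wall.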
The code for colliding against the bottom and right walls
is very similar to the code for colliding against the top
wall. Colliding against the paddle, however, is slightly
more complex. First, you calculate the absolute value of
the difference between the y position of the ball and the y
position of the paddle. If this difference is greater than
half the height of the paddle, the ball is too high or too
low, as shown earlier in Figure 1.7(b). You also need to
check that the ball’s x-position lines up with the paddle,
and the ball is not trying to move away from the paddle.
Satisfying all these conditions means the ball collides
with the paddle, and you should negate the x velocity:
mBallVel.x *= -1.0f;
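The three conditions can be sketched as a single predicate. This is a minimal standalone version; the x range (20 to 25) is an illustrative assumption for where the paddle's face sits, not a value taken from the book's code:

```cpp
#include <cmath>

// True if the ball should bounce off the (left-side) paddle:
// vertically within half the paddle's height, horizontally lined up
// with the paddle's assumed x range, and moving left toward it
bool CollidesWithPaddle(float ballX, float ballY, float ballVelX,
                        float paddleY, float paddleHalfHeight)
{
    float diff = std::abs(ballY - paddleY);
    bool linedUpX = (ballX >= 20.0f && ballX <= 25.0f);
    return diff <= paddleHalfHeight && linedUpX && ballVelX < 0.0f;
}
```

When the predicate is true, the game negates the ball's x velocity, exactly as in the line above.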
With this code complete, the ball and paddle now both
move onscreen, as in Figure 1.8. You have now
completed your simple version of Pong!
Figure 1.8 Final version of Pong
GAME PROJECT
This chapter’s game project implements the full Pong
game code constructed throughout the chapter. To
control the paddle, the player uses the W and S keys.
The game ends when the ball moves offscreen. The
code is available in the book’s GitHub repository in
the Chapter01 directory. Open Chapter01-
windows.sln in Windows and Chapter01-
mac.xcodeproj on Mac. (For instructions on how
to access the GitHub repository, consult the
instructions at the beginning of this chapter.)
SUMMARY
Real-time games update many times per second via a
loop called the game loop. Each iteration of this loop
is a frame. For example, 60 frames per second means
that there are 60 iterations of the game loop per
second. The game loop has three main phases that it
completes every frame: processing input, updating
the game world, and generating output. Input
involves not only input devices such as the keyboard
and mouse but networking data, replay data, and so
on. Outputs include graphics, audio, and force
feedback controllers.
ADDITIONAL READING
Jason Gregory dedicates several pages to discussing
the different formulations of a game loop, including
how some games take better advantage of multi-core
CPUs. There are also many excellent references
online for the various libraries used; for example, the
SDL API reference is handy.
EXERCISES
Both of this chapter’s exercises focus on modifying
your version of Pong. The first exercise involves
adding a second player, and the second exercise
involves adding support for multiple balls.
Exercise 1.1
The original version of Pong supported two players.
Remove the right wall onscreen and replace that wall
with a second paddle for player 2. For this second
paddle, use the I and K keys to move the paddle up
and down. Supporting a second paddle requires
duplicating all the functionality of the first paddle: a
member variable for the paddle’s position, the
direction, code to process input for player 2, code
that draws the paddle, and code that updates the
paddle. Finally, make sure to update the ball collision
code so that the ball correctly collides with both
paddles.
Exercise 1.2
Many pinball games support “multiball,” where
multiple balls are in play at once. It turns out
multiball is also fun for Pong! To support multiple
balls, create a Ball struct that contains two
Vector2s: one for the position and one for the
velocity. Next, create a std::vector<Ball>
member variable for Game to store these different
balls. Then change the code in Game::Initialize
to initialize the positions and velocities of several
balls. In Game::UpdateGame, change the ball
update code so that rather than using the individual
mBallVel and mBallPos variables, the code loops
over the std::vector for all the balls.
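A minimal sketch of the suggested Ball struct and the per-ball update loop might look like this (the exact member names are up to you; the Vector2 struct is repeated here so the sketch is self-contained):

```cpp
#include <vector>

struct Vector2
{
    float x;
    float y;
};

// One ball: a position and a velocity, as the exercise suggests
struct Ball
{
    Vector2 mPos;
    Vector2 mVel;
};

// Move every ball by its velocity, scaled by delta time
void UpdateBalls(std::vector<Ball>& balls, float deltaTime)
{
    for (Ball& b : balls)
    {
        b.mPos.x += b.mVel.x * deltaTime;
        b.mPos.y += b.mVel.y * deltaTime;
    }
}
```

The wall and paddle collision checks would then also run inside this loop, once per ball.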
CHAPTER 2
GAME OBJECTS AND 2D GRAPHICS
class Actor
{
public:
    // Virtual functions for subclasses to override
    // ...
};

// A subclass in the monolithic hierarchy approach
class PlayerActor : public Actor
{
public:
    // Overridden functions specific to the player
    // ...
};
class GameObject
{
public:
    void AddComponent(Component* comp);
    void RemoveComponent(Component* comp);
private:
    std::unordered_set<Component*> mComponents;
};
class Actor
{
public:
    enum State
    {
        EActive,
        EPaused,
        EDead
    };
    // Constructor/destructor
    Actor(class Game* game);
    virtual ~Actor();
    // Getters/setters
    // ...
    // Add/remove components
    void AddComponent(class Component* component);
    void RemoveComponent(class Component* component);
private:
    // Actor's state
    State mState;
    // Transform
    Vector2 mPosition;
    float mScale;
    float mRotation;
};
class Component
{
public:
    // Constructor
    // (the lower the update order, the earlier the component updates)
    Component(class Actor* owner, int updateOrder = 100);
    // Destructor
    virtual ~Component();
    // Update this component by delta time
    virtual void Update(float deltaTime);
protected:
    // Owning actor
    class Actor* mOwner;
    // Update order of component
    int mUpdateOrder;
};
Other Approaches
There are many other approaches to game object
models. Some use interface classes to declare the
different possible functionalities, and each game
object then implements the interfaces necessary to
represent it. Other approaches extend the
component model a step further and eliminate the
containing game object entirely. Instead, these
approaches use a component database that tracks
objects with a numeric identifier. Still other
approaches define objects by their properties. In
these systems, adding a health property to an object
means that it can take damage.
// If currently updating actors, add to the pending list instead
if (mUpdatingActors)
{
    mPendingActors.emplace_back(actor);
}
else
{
    mActors.emplace_back(actor);
}
void Game::UpdateGame()
{
    // ...
    // Update all actors
    mUpdatingActors = true;
    for (auto actor : mActors)
        actor->Update(deltaTime);
    mUpdatingActors = false;
    // Move any pending actors into mActors
    for (auto pending : mPendingActors)
        mActors.emplace_back(pending);
    mPendingActors.clear();
    // Collect any dead actors, then delete them
    std::vector<Actor*> deadActors;
    for (auto actor : mActors)
        if (actor->GetState() == Actor::EDead)
            deadActors.emplace_back(actor);
    for (auto actor : deadActors)
        delete actor;
}

// On shutdown, delete any remaining actors
while (!mActors.empty())
    delete mActors.back();
SPRITES
A sprite is a visual object in a 2D game, typically
used to represent characters, backgrounds, and other
dynamic objects. Most 2D games have dozens if not
hundreds of sprites, and for mobile games, the sprite
data accounts for much of the overall download size
of the game. Because of the prevalence of sprites in
2D games, it is important to use them as efficiently
as possible.
IMG_Init(IMG_INIT_PNG);
Table 2.1 lists the supported file formats. Note that SDL
already supports the BMP file format without SDL
Image, which is why there is no IMG_INIT_BMP flag in
this table.
Flag Format
IMG_INIT_JPG JPEG
IMG_INIT_PNG PNG
IMG_INIT_TIF TIFF
Once SDL Image is initialized, you can use IMG_Load to
load an image file into an SDL_Surface:
SDL_Surface* IMG_Load(
    const char* file // Image file name
);
SDL_Texture* SDL_CreateTextureFromSurface(
    SDL_Renderer* renderer, // Renderer to use
    SDL_Surface* surface    // Surface to convert
);
SDL_Texture* Game::LoadTexture(const char* fileName)
{
    // Load the image into a surface
    SDL_Surface* surf = IMG_Load(fileName);
    if (!surf)
        return nullptr;
    // Convert the surface to a texture
    SDL_Texture* text = SDL_CreateTextureFromSurface(mRenderer, surf);
    SDL_FreeSurface(surf);
    if (!text)
        return nullptr;
    return text;
}
An interesting question is where to store the loaded
textures. It’s very common for a game to use the same
image file multiple times for multiple different actors. If
there are 20 asteroids, and each asteroid uses the same
image file, it doesn’t make sense to load the file from disk
20 times.
note
While a map of filenames to SDL_Texture pointers makes sense in a
simple case, consider that a game has many different types of assets—
textures, sound effects, 3D models, fonts, and so on. Therefore, writing a
more robust system to generically handle all types of assets makes sense.
But in the interest of simplicity, this book does not implement such an asset
management system.
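A minimal sketch of such a cache uses an unordered_map from filename to texture. Here TextureHandle and LoadFromDisk are stand-ins for SDL_Texture* and the real loading code, and the filename is illustrative:

```cpp
#include <string>
#include <unordered_map>

using TextureHandle = int;

static int sLoadCount = 0; // Tracks how many real loads would happen

// Stand-in for the actual disk load (IMG_Load + texture creation)
TextureHandle LoadFromDisk(const std::string& fileName)
{
    sLoadCount++;
    return static_cast<TextureHandle>(fileName.size());
}

// Return the cached texture if it exists; otherwise load and cache it
TextureHandle GetTexture(std::unordered_map<std::string, TextureHandle>& cache,
                         const std::string& fileName)
{
    auto iter = cache.find(fileName);
    if (iter != cache.end())
    {
        return iter->second; // Already loaded: reuse it
    }
    TextureHandle tex = LoadFromDisk(fileName);
    cache.emplace(fileName, tex);
    return tex;
}
```

With this pattern, 20 asteroids sharing one image file trigger exactly one load; the other 19 requests hit the map.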
Drawing Sprites
Suppose a game has a basic 2D scene with a
background image and a character. A simple way to
draw this scene is by first drawing the background
image and then the character. This is like how a
painter would paint the scene, and hence this
approach is known as the painter’s algorithm. In
the painter’s algorithm, the game draws the sprites in
back-to-front order. Figure 2.3 demonstrates the
painter’s algorithm, first drawing the background
star field, then the moon, then any asteroids, and
finally the ship.
class SpriteComponent : public Component
{
public:
    // (Lower draw order corresponds with further back)
    SpriteComponent(class Actor* owner, int drawOrder = 100);
    ~SpriteComponent();
    virtual void Draw(SDL_Renderer* renderer);
    virtual void SetTexture(SDL_Texture* texture);
    int GetDrawOrder() const { return mDrawOrder; }
protected:
    // Texture to draw
    SDL_Texture* mTexture;
    // Draw order used by the painter's algorithm
    int mDrawOrder;
    // Width/height of texture
    int mTexWidth;
    int mTexHeight;
};
// Find the first sprite with a higher draw order
int myDrawOrder = sprite->GetDrawOrder();
auto iter = mSprites.begin();
for ( ; iter != mSprites.end(); ++iter)
    if (myDrawOrder < (*iter)->GetDrawOrder())
        break;
// Insert before the iterator's position
mSprites.insert(iter, sprite);
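The sorted-insertion walk is the same whether it stores sprites or plain numbers; here it is as a standalone function over ints, which makes the ordering behavior easy to check:

```cpp
#include <vector>

// Insert a draw-order value into an already-sorted vector,
// keeping it sorted (mirrors the sprite insertion walk)
void InsertSorted(std::vector<int>& drawOrders, int value)
{
    auto iter = drawOrders.begin();
    for ( ; iter != drawOrders.end(); ++iter)
    {
        if (value < *iter)
        {
            break;
        }
    }
    drawOrders.insert(iter, value);
}
```

Because the vector stays sorted by draw order, drawing it front to back of the container yields back-to-front rendering on screen.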
void SpriteComponent::SetTexture(SDL_Texture* texture)
{
    mTexture = texture;
    // Query the width/height of the texture
    SDL_QueryTexture(texture, nullptr, nullptr,
        &mTexWidth, &mTexHeight);
}
int SDL_RenderCopy(
    SDL_Renderer* renderer,
    SDL_Texture* texture,
    const SDL_Rect* srcrect, // Source rectangle (NULL for whole texture)
    const SDL_Rect* dstrect  // Destination rectangle
);
int SDL_RenderCopyEx(
    SDL_Renderer* renderer,
    SDL_Texture* texture,
    const SDL_Rect* srcrect,
    const SDL_Rect* dstrect,
    double angle,            // Rotation angle in degrees (clockwise)
    const SDL_Point* center, // Rotation center (NULL for dstrect center)
    SDL_RendererFlip flip    // How to flip the texture (usually SDL_FLIP_NONE)
);
void SpriteComponent::Draw(SDL_Renderer* renderer)
{
    if (mTexture)
    {
        SDL_Rect r;
        // Scale the width/height by the owner's scale
        r.w = static_cast<int>(mTexWidth * mOwner->GetScale());
        r.h = static_cast<int>(mTexHeight * mOwner->GetScale());
        // Center the rectangle around the owner's position
        r.x = static_cast<int>(mOwner->GetPosition().x - r.w / 2);
        r.y = static_cast<int>(mOwner->GetPosition().y - r.h / 2);
        // Draw (SDL angles are in degrees, clockwise-positive)
        SDL_RenderCopyEx(renderer, mTexture, nullptr, &r,
            -Math::ToDegrees(mOwner->GetRotation()), nullptr, SDL_FLIP_NONE);
    }
}
Animating Sprites
Most 2D games implement sprite animation using a
technique like flipbook animation: a series of
static 2D images played in rapid succession to create
an illusion of motion. Figure 2.4 illustrates what such
a series of images for different animations for a
skeleton sprite might look like.
class AnimSpriteComponent : public SpriteComponent
{
public:
    AnimSpriteComponent(class Actor* owner, int drawOrder = 100);
    // Update animation every frame
    void Update(float deltaTime) override;
    // Set the textures used for animation
    void SetAnimTextures(const std::vector<SDL_Texture*>& textures);
private:
    // All textures in the animation
    std::vector<SDL_Texture*> mAnimTextures;
    // Current frame displayed
    float mCurrFrame;
    // Animation frame rate
    float mAnimFPS;
};
void AnimSpriteComponent::Update(float deltaTime)
{
    SpriteComponent::Update(deltaTime);
    if (mAnimTextures.size() > 0)
    {
        // Advance the current frame based on animation FPS and delta time
        mCurrFrame += mAnimFPS * deltaTime;
        // Wrap around the frame count if needed
        while (mCurrFrame >= mAnimTextures.size())
            mCurrFrame -= mAnimTextures.size();
        SetTexture(mAnimTextures[static_cast<int>(mCurrFrame)]);
    }
}
SCROLLING BACKGROUNDS
A trick often used in 2D games is having a
background that scrolls by. This creates an
impression of a larger world, and infinite scrolling
games often use this technique. For now, we are
focusing on scrolling backgrounds, as opposed to
scrolling through an actual level. The easiest method
is to split the background into screen-sized image
segments, which are repositioned every frame to
create the illusion of scrolling.
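The repositioning step for one background segment can be sketched on its own. This is a simplified sketch of the offset logic (the screen width, speeds, and segment counts are illustrative values): slide the x offset each frame, and once a segment is fully offscreen, snap it to the right of the last segment so the chain repeats.

```cpp
#include <cassert>

// Update one background segment's x offset; a negative scroll speed
// scrolls the background to the left
void ScrollOffset(float& xOffset, float scrollSpeed, float deltaTime,
                  float screenWidth, int numTextures)
{
    xOffset += scrollSpeed * deltaTime;
    // If this segment is completely off the screen, reposition it
    // just past the rightmost segment
    if (xOffset < -screenWidth)
    {
        xOffset = (numTextures - 1) * screenWidth - 1;
    }
}
```

With two screen-sized segments, one is always covering the screen while the other wraps around behind it.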
class BGSpriteComponent : public SpriteComponent
{
public:
   // Set draw order to default to lower (so it's in the background)
   BGSpriteComponent(class Actor* owner, int drawOrder = 10);
   // Update/draw overridden from parent
   void Update(float deltaTime) override;
   void Draw(SDL_Renderer* renderer) override;
   // Set the textures used for the background
   void SetBGTextures(const std::vector<SDL_Texture*>& textures);
   // Get/set screen size and scroll speed
   void SetScreenSize(const Vector2& size) { mScreenSize = size; }
   void SetScrollSpeed(float speed) { mScrollSpeed = speed; }
   float GetScrollSpeed() const { return mScrollSpeed; }
private:
   // Struct to encapsulate each background image and its offset
   struct BGTexture
   {
      SDL_Texture* mTexture;
      Vector2 mOffset;
   };
   std::vector<BGTexture> mBGTextures;
   Vector2 mScreenSize;
   float mScrollSpeed;
};
void BGSpriteComponent::SetBGTextures(
   const std::vector<SDL_Texture*>& textures)
{
   int count = 0;
   for (auto tex : textures)
   {
      BGTexture temp;
      temp.mTexture = tex;
      // Each texture is screen width in offset
      temp.mOffset.x = count * mScreenSize.x;
      temp.mOffset.y = 0;
      mBGTextures.emplace_back(temp);
      count++;
   }
}

void BGSpriteComponent::Update(float deltaTime)
{
   SpriteComponent::Update(deltaTime);
   for (auto& bg : mBGTextures)
   {
      // Update the x offset
      bg.mOffset.x += mScrollSpeed * deltaTime;
      // If this is completely off the screen, reset offset
      // to the right of the last background texture
      if (bg.mOffset.x < -mScreenSize.x)
      {
         bg.mOffset.x = (mBGTextures.size() - 1) * mScreenSize.x - 1;
      }
   }
}
GAME PROJECT
Unfortunately, you have not learned about enough
new topics to make a game with noticeably more
complex mechanics than the Pong clone created in
Chapter 1, “Game Programming Overview.” And it
wouldn’t be particularly interesting to just add
sprites to the previous chapter’s game. So in lieu of a
complete game, this chapter’s game project
demonstrates the new techniques covered in this
chapter. The code is available in the book’s GitHub
repository, in the Chapter02 directory. Open
Chapter02-windows.sln on Windows and
Chapter02-mac.xcodeproj on Mac. Figure 2.6
shows the game project in action. Jacob Zinman-
Jeanes created the sprite images, which are licensed
under the CC BY license.
Figure 2.6 Side-scroller project in action
class Ship : public Actor
{
public:
   Ship(class Game* game);
   void UpdateActor(float deltaTime) override;
   void ProcessKeyboard(const uint8_t* state);
   float GetRightSpeed() const { return mRightSpeed; }
   float GetDownSpeed() const { return mDownSpeed; }
private:
   float mRightSpeed;
   float mDownSpeed;
};
Ship::Ship(Game* game)
   :Actor(game)
   ,mRightSpeed(0.0f)
   ,mDownSpeed(0.0f)
{
   // Create an animated sprite component
   AnimSpriteComponent* asc = new AnimSpriteComponent(this);
   std::vector<SDL_Texture*> anims = {
      game->GetTexture("Assets/Ship01.png"),
      game->GetTexture("Assets/Ship02.png"),
      game->GetTexture("Assets/Ship03.png"),
      game->GetTexture("Assets/Ship04.png"),
   };
   asc->SetAnimTextures(anims);
}

void Ship::UpdateActor(float deltaTime)
{
   Actor::UpdateActor(deltaTime);
   // Update position based on speeds and delta time
   Vector2 pos = GetPosition();
   pos.x += mRightSpeed * deltaTime;
   pos.y += mDownSpeed * deltaTime;
   // Restrict position to the screen
   // ...
   SetPosition(pos);
}
SUMMARY
There are many ways to represent game objects. The
simplest approach is to use a monolithic hierarchy
with one base class that every game object inherits
from, but this can quickly grow out of hand. With a
component-based model, you can instead define the
functionality of a game object in terms of the
components it contains. This book uses a hybrid
approach that has a shallow hierarchy of game
objects but components that implement some
behaviors, such as drawing and movement.
ADDITIONAL READING
Jason Gregory dedicates several pages to different
types of game object models, including the model
used at Naughty Dog. Michael Dickheiser’s book
contains an article on implementing a pure
component model.
EXERCISES
The first exercise of this chapter is a thought
experiment on the different types of game object
models. In the second exercise you add functionality
to the AnimSpriteComponent class. The final
exercise involves adding support for tile maps, a
technique for generating 2D scenes from tiles.
Exercise 2.1
Consider an animal safari game where the player can
drive around in different vehicles to observe animals
in the wild. Think about the different types of
creatures, plants, and vehicles that might exist in
such a game. How might you implement these
objects in a monolithic class hierarchy object model?
Exercise 2.2
The AnimSpriteComponent class currently
supports only a single animation, composed of all the
sprites in the vector. Modify the class to support
several different animations. Define each animation
as a range of textures in the vector. Use the
CharacterXX.png files in the
Chapter02/Assets directory for testing.
Exercise 2.3
One approach to generate a 2D scene is via a tile
map. In this approach, an image file (called a tile
set) contains a series of uniformly sized tiles. Many
of these tiles combine to form a 2D scene. Tiled
(http://www.mapeditor.org), which is a great
program for generating tile sets and tile maps,
generated the tile maps for this exercise. Figure 2.7
illustrates what a portion of the tile set looks like.
In this case, the tile maps are in CSV files. Use the
MapLayerX.csv files in Chapter02/Assets, where
there are three different layers (Layer 1 being the closest
and Layer 3 the farthest). Tiles.png contains the tile
set. Each row in the CSV file contains a series of
numbers, like this:
-1,0,5,5,5,5
-1 means there is no image for that tile (so you should
render nothing for that tile). Every other number
references a specific tile from the tile set. The numbering
goes left to right and then top to bottom. So, in this tile set,
tile 8 is the leftmost tile on the second row.
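As a starting point for this exercise, a tile index from the CSV can be converted into a (row, column) position within the tile set. This sketch assumes 8 tiles per row, which matches the numbering described above (tile 8 being the leftmost tile of the second row):

```cpp
#include <cassert>
#include <utility>

// Convert a tile index into its (row, column) within the tile set,
// given left-to-right, top-to-bottom numbering
std::pair<int, int> TileToRowCol(int tile, int tilesPerRow)
{
    int row = tile / tilesPerRow;
    int col = tile % tilesPerRow;
    return {row, col};
}
```

Multiplying the row and column by the tile's pixel dimensions then gives the source rectangle to draw from Tiles.png.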
Vector2 myVector;
myVector.x = 5;
myVector.y = 10;
// Vector subtraction
Vector2 a, b;
Vector2 result = a - b;

// Scalar multiplication
Vector2 a;
Vector2 result = a * 5.0f;

// Vector addition
Vector2 a, b;
Vector2 result = a + b;
Determining a Distance: Length
As mentioned earlier in this chapter, a vector
represents both a magnitude and a direction. You use
two vertical bars on either side of a vector variable to
denote computing the magnitude (or length) of the
vector. For example, you write the magnitude of a as
||a||. To calculate the length of a vector, take the
square root of the sum of the squares of each
component:

||a|| = sqrt(a.x^2 + a.y^2)
Vector2 a;
float length = a.Length();
warning
DIVIDE BY ZERO: If a vector has zeros for all its components, the length of
this vector is also zero. In this case, the normalization formula has a division
by zero. For floating-point variables, dividing by zero yields the error value
NaN (not a number). Once a calculation has NaNs, it’s impossible to get rid
of them because any operation on NaN also yields NaN.
A common workaround for this is to make a “safe” normalize function that
first tests whether the length of the vector is close to zero. If it is, you
simply don’t perform the division, thus avoiding the division by zero.
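The "safe" normalize described in the warning can be sketched with a minimal stand-in Vec2 type (the epsilon threshold here is an illustrative choice):

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector stand-in for the book's Vector2
struct Vec2 { float x; float y; };

// Normalize in place, but skip the divide when the length is nearly
// zero so a zero vector never produces NaNs
void SafeNormalize(Vec2& v, float epsilon = 0.001f)
{
    float length = std::sqrt(v.x * v.x + v.y * v.y);
    if (length > epsilon)
    {
        v.x /= length;
        v.y /= length;
    }
    // Otherwise leave the vector untouched
}
```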
// Normalize a in place
Vector2 a;
a.Normalize();

// Or compute a normalized copy, leaving a unchanged
Vector2 a;
Vector2 result = Vector2::Normalize(a);
Converting a Forward Vector to an Angle:
Arctangent
Now suppose you have the opposite of the problem
described in the previous section. Given a
forward vector, you want to convert it into an angle.
Recall that the tangent function takes in an angle and
returns the ratio between the opposite and adjacent
sides of a triangle.
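The arctangent conversion can be checked numerically with the standard library's atan2. This sketch negates the y-component, as the book does, because SDL's +y axis points down the screen:

```cpp
#include <cassert>
#include <cmath>

// Convert a forward vector into a rotation angle in radians
// (negate y because +y is down in SDL's screen coordinates)
float ForwardToAngle(float x, float y)
{
    return std::atan2(-y, x);
}
```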
// New forward is the vector from the ship to the asteroid
Vector2 shipToAsteroid = asteroid->GetPosition() - ship->GetPosition();
shipToAsteroid.Normalize();
// Convert new forward to angle with atan2 (negate y-component for SDL)
float angle = Math::Atan2(-shipToAsteroid.y, shipToAsteroid.x);
ship->SetRotation(angle);
If the two vectors a and b are unit vectors, you can omit
the division because the length of each vector is one:

θ = arccos(a · b)

This is one reason it’s helpful to normalize vectors in
advance if only the direction matters.
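The unit-vector shortcut can be verified with a small sketch using the standard library (the vectors are passed as components to keep it self-contained):

```cpp
#include <cassert>
#include <cmath>

// Angle between two unit vectors: since |a| = |b| = 1, the angle is
// simply acos(a . b), with no division by the lengths
float AngleBetweenUnit(float ax, float ay, float bx, float by)
{
    float dot = ax * bx + ay * by;
    return std::acos(dot);
}
```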
BASIC MOVEMENT
Recall that Chapter 2’s game project overrides the
UpdateActor function for Ship (a subclass of
Actor) to make the ship move. However, movement
is such a common feature for a game that it makes
sense to instead encapsulate this behavior in a
component. This section first explores how to create
a MoveComponent class that can move actors
around the game world. You’ll leverage this class to
create asteroids that move around the screen. Next,
this section shows how to create a subclass of
MoveComponent called InputComponent that you
can hook up directly to keyboard inputs.
This way, the actor can both move forward and rotate
based on the respective speeds. To implement
MoveComponent as a subclass of Component, you first
declare the class as in Listing 3.1. It has separate speeds
to implement both forward and rotational movement, as
well as getter/setter functions for these speeds. It also
overrides the Update function, which will contain the
code that moves the actor. Note that the constructor of
MoveComponent specifies a default update order of 10.
Recall that the update order determines the order in
which the actor updates its components. Because the
default update order for other components is 100,
MoveComponent will update before most other
components do.
class MoveComponent : public Component
{
public:
   // Lower update order to update first
   MoveComponent(class Actor* owner, int updateOrder = 10);
   void Update(float deltaTime) override;

   float GetAngularSpeed() const { return mAngularSpeed; }
   float GetForwardSpeed() const { return mForwardSpeed; }
   void SetAngularSpeed(float speed) { mAngularSpeed = speed; }
   void SetForwardSpeed(float speed) { mForwardSpeed = speed; }
private:
   // Controls rotation (radians/second)
   float mAngularSpeed;
   // Controls forward movement (units/second)
   float mForwardSpeed;
};
void MoveComponent::Update(float deltaTime)
{
   if (!Math::NearZero(mAngularSpeed))
   {
      float rot = mOwner->GetRotation();
      rot += mAngularSpeed * deltaTime;
      mOwner->SetRotation(rot);
   }
   if (!Math::NearZero(mForwardSpeed))
   {
      Vector2 pos = mOwner->GetPosition();
      pos += mOwner->GetForward() * mForwardSpeed * deltaTime;
      mOwner->SetPosition(pos);
   }
}
Asteroid::Asteroid(Game* game)
   :Actor(game)
{
   // Initialize to random position/orientation
   Vector2 randPos = Random::GetVector(Vector2::Zero,
      Vector2(1024.0f, 768.0f));
   SetPosition(randPos);
   SetRotation(Random::GetFloatRange(0.0f, Math::TwoPi));
   // Create a sprite component and set its texture
   SpriteComponent* sc = new SpriteComponent(this);
   sc->SetTexture(game->GetTexture("Assets/Asteroid.png"));
   // Create a move component and set a forward speed
   MoveComponent* mc = new MoveComponent(this);
   mc->SetForwardSpeed(150.0f);
}

// In Game, create several asteroids
const int numAsteroids = 20;
for (int i = 0; i < numAsteroids; i++)
{
   new Asteroid(this);
}
void Actor::ProcessInput(const uint8_t* keyState)
{
   if (mState == EActive)
   {
      // First process input for components
      for (auto comp : mComponents)
      {
         comp->ProcessInput(keyState);
      }
      // Then any actor-specific input
      ActorInput(keyState);
   }
}

// In Game::ProcessInput, forward the keyboard state to all actors
mUpdatingActors = true;
for (auto actor : mActors)
{
   actor->ProcessInput(keyState);
}
mUpdatingActors = false;
You set the mUpdatingActors bool to true before the
loop to handle an actor or component trying to create
another actor inside ProcessInput. In this case, you
must add to the mPendingActors vector instead of
mActors. This is the same technique used in Chapter 2
to ensure that you don’t modify mActors while iterating
over the vector.
class InputComponent : public MoveComponent
{
public:
   InputComponent(class Actor* owner);
   void ProcessInput(const uint8_t* keyState) override;
   // Getter/setter functions for the member data
   // ...
private:
   // The maximum forward/angular speeds
   float mMaxForwardSpeed;
   float mMaxAngularSpeed;
   // Keys for forward/back movement
   int mForwardKey;
   int mBackKey;
   // Keys for angular movement
   int mClockwiseKey;
   int mCounterClockwiseKey;
};
void InputComponent::ProcessInput(const uint8_t* keyState)
{
   // Calculate forward speed for MoveComponent
   float forwardSpeed = 0.0f;
   if (keyState[mForwardKey])
      forwardSpeed += mMaxForwardSpeed;
   if (keyState[mBackKey])
      forwardSpeed -= mMaxForwardSpeed;
   SetForwardSpeed(forwardSpeed);

   // Calculate angular speed for MoveComponent
   float angularSpeed = 0.0f;
   if (keyState[mClockwiseKey])
      angularSpeed += mMaxAngularSpeed;
   if (keyState[mCounterClockwiseKey])
      angularSpeed -= mMaxAngularSpeed;
   SetAngularSpeed(angularSpeed);
}
NEWTONIAN PHYSICS
Although the basic movement approach used so far
in this chapter works for some games, for movement
that more closely resembles the real world, you need
a physically accurate approach. Luckily, Isaac
Newton (among others) developed Newtonian
physics (or classical mechanics) to describe laws of
motion. Games commonly utilize Newtonian physics
because its laws hold if objects are not moving near
the speed of light and objects are larger than
quantum particles. Because games typically don’t
feature objects in those edge cases, Newtonian
physics works well.
// (Semi-implicit) Euler integration
// Update velocity
mVelocity += mAcceleration * deltaTime;
// Update position
mPosition += mVelocity * deltaTime;
Circle-Versus-Circle Intersection
Two circles intersect with each other if and only if the
distance between their centers is less than or equal to
the sum of their radii. Figure 3.14 demonstrates this
between two circles. In the first case, the two circles
are far enough apart that they do not intersect. In
this case, the distance between their centers is
greater than the sum of the radii. However, in the
second case, where the circles do intersect, the
distance between their centers is less than the sum of
their radii.
Figure 3.14 Testing intersection between two
circles
Formally, for circles with centers A and B and radii r_A and r_B, the circles
do not intersect when ||B − A|| > r_A + r_B, and they intersect when
||B − A|| ≤ r_A + r_B.
note
The approach covered in this section also works for spheres because the
same principle applies.
Creating a CircleComponent Subclass
To support collision detection of actors, you can
create a CircleComponent and a method to test for
intersection between two circle components. You can
then add a CircleComponent to any actor that
needs collision.
class CircleComponent : public Component
{
public:
   CircleComponent(class Actor* owner);
   void SetRadius(float radius) { mRadius = radius; }
   float GetRadius() const;
   const Vector2& GetCenter() const;
private:
   float mRadius;
};

// Helper function to test intersection between two circle components
bool Intersect(const CircleComponent& a, const CircleComponent& b);
bool Intersect(const CircleComponent& a, const CircleComponent& b)
{
   // Calculate distance squared
   Vector2 diff = a.GetCenter() - b.GetCenter();
   float distSq = diff.LengthSq();
   // Calculate sum of radii squared
   float radiiSq = a.GetRadius() + b.GetRadius();
   radiiSq *= radiiSq;
   return distSq <= radiiSq;
}
// In Asteroid's constructor: create a circle component for collision
mCircle = new CircleComponent(this);
mCircle->SetRadius(40.0f);

// In Laser::UpdateActor: test for intersection against each asteroid
for (auto ast : GetGame()->GetAsteroids())
{
   if (Intersect(*mCircle, *(ast->GetCircle())))
   {
      // If they intersect, mark both the laser and the asteroid dead
      SetState(EDead);
      ast->SetState(EDead);
      break;
   }
}
GAME PROJECT
This chapter’s game project implements a basic
version of the classic game Asteroids. The earlier
sections of this chapter cover most of the new code
used in the game project. The project implements
movement with MoveComponent and
InputComponent. The CircleComponent code
tests if the ship’s laser collides against asteroids. A
notable feature that’s missing in the game project is
that the asteroids do not collide with the ship
(though you will add that in Exercise 3.2). The game
project also does not implement Newtonian physics
(though you will add that in Exercise 3.3). The code
is available in the book’s GitHub repository, in the
Chapter03 directory. Open Chapter03-windows.sln
on Windows and Chapter03-mac.xcodeproj on Mac.
void Ship::ActorInput(const uint8_t* keyState)
{
   if (keyState[SDL_SCANCODE_SPACE] && mLaserCooldown <= 0.0f)
   {
      // Create a laser and set its position/rotation to mine
      Laser* laser = new Laser(GetGame());
      laser->SetPosition(GetPosition());
      laser->SetRotation(GetRotation());
      // Reset laser cooldown (half a second)
      mLaserCooldown = 0.5f;
   }
}

void Ship::UpdateActor(float deltaTime)
{
   mLaserCooldown -= deltaTime;
}
SUMMARY
A vector represents a magnitude and a direction. You
can use vectors for many different computations,
including creating a vector between two points (using
subtraction), calculating the distance between two
points (using subtraction and length), finding the
angle between two vectors (using the dot product),
and calculating a normal to a surface (with the cross
product).
ADDITIONAL READING
Eric Lengyel provides an in-depth look at all the
different mathematical concepts used in 3D game
programming. Aspiring graphics programmers
especially should review the more advanced material
in his book. The Gaffer on Games site, maintained by
Glenn Fielder, has several articles on the basics of
physics in games, including articles on different
forms of numeric integration and why fixing a time
step is important. Finally, Ian Millington covers how
to implement Newtonian physics in games in detail.
Exercise 3.1
1. Given the vectors and , and the
scalar value s=2, calculate the following:
(a)
(b)
(c)
2. Given the triangle in Figure 3.16 and the following
points:
A=〈−,1〉
B=〈2,4〉
C=〈3,3〉
Calculate θ using the vector operations discussed
in this chapter.
Figure 3.16 Triangle for problem 2 of Exercise 3.1
Exercise 3.2
Currently, the ship does not collide against asteroids
in the chapter game project. Add collision for the
ship. To do so, you first need to create a
CircleComponent in Ship and specify a
radius. Next, in Ship::UpdateActor, you need to
test against the collision of all asteroids (much the
way the laser does). If the ship collides with an
asteroid, force it to reset in the center of the screen
with a rotation of zero.
Exercise 3.3
Modify MoveComponent so that it uses Newtonian
physics. Specifically, change it to have a mass, a sum
of forces, and a velocity as member variables. Then
in Update, change the code for forward movement
so it instead calculates an acceleration from the
forces, a velocity from the acceleration, and a
position from the velocity.
ARTIFICIAL INTELLIGENCE
From the Alert state you have two transitions: 75% and
25%. These transitions refer to the probability of the
transition. So, there’s a 75% chance that when in the
Alert state, the AI will transition to the Attack state. In
the Alarm state, the Complete transition means that after
the AI finishes triggering the alarm (perhaps by
interacting with some object in the game world), the AI
transitions into the Attack state.
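Probabilistic transitions like the Alert state's 75%/25% split can be implemented by rolling a random number in [0, 1) and walking the transition list until the cumulative probability exceeds the roll. A minimal sketch (the state names and helper are illustrative; in a game, the roll would come from a random number generator):

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Pick the next state given a list of (state name, probability) pairs
// and a roll in [0, 1)
std::string PickTransition(
    const std::vector<std::pair<std::string, float>>& transitions,
    float roll)
{
    float cumulative = 0.0f;
    for (const auto& t : transitions)
    {
        cumulative += t.second;
        if (roll < cumulative)
        {
            return t.first;
        }
    }
    // Fall back to the last transition (guards against rounding error)
    return transitions.back().first;
}
```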
enum AIState
{
   Patrol,
   Death,
   Attack
};
void AIComponent::Update(float deltaTime)
{
   switch (mState)
   {
      case Patrol:
         UpdatePatrol(deltaTime);
         break;
      case Death:
         UpdateDeath(deltaTime);
         break;
      case Attack:
         UpdateAttack(deltaTime);
         break;
      default:
         // Invalid
         break;
   }
}
void AIComponent::ChangeState(AIState newState)
{
   // Exit current state
   // ...
   mState = newState;
   // Enter new state
   // ...
}
States as Classes
An alternative approach to the one just described is
to use classes to represent each state. First, define a
base class for all states called AIState:
class AIState
{
public:
   AIState(class AIComponent* owner)
      :mOwner(owner)
   { }
   // State-specific behavior
   virtual void Update(float deltaTime) = 0;
   virtual void OnEnter() = 0;
   virtual void OnExit() = 0;
   // Getter for string name of state
   virtual const char* GetName() const = 0;
protected:
   class AIComponent* mOwner;
};
class AIComponent : public Component
{
public:
   AIComponent(class Actor* owner);
   void Update(float deltaTime) override;
   void ChangeState(const std::string& name);
   // Add a new state to the map
   void RegisterState(class AIState* state);
private:
   // Maps name of state to AIState instance
   std::unordered_map<std::string, class AIState*> mStateMap;
   // Current state we're in
   class AIState* mCurrentState;
};
void AIComponent::RegisterState(AIState* state)
{
   mStateMap.emplace(state->GetName(), state);
}

void AIComponent::Update(float deltaTime)
{
   if (mCurrentState)
   {
      mCurrentState->Update(deltaTime);
   }
}

void AIComponent::ChangeState(const std::string& name)
{
   // First exit the current state
   if (mCurrentState)
   {
      mCurrentState->OnExit();
   }
   // Try to find the new state from the map
   auto iter = mStateMap.find(name);
   if (iter != mStateMap.end())
   {
      mCurrentState = iter->second;
      // We're entering the new state
      mCurrentState->OnEnter();
   }
   else
   {
      SDL_Log("Could not find AIState %s in state map", name.c_str());
      mCurrentState = nullptr;
   }
}
class AIPatrol : public AIState
{
public:
   AIPatrol(class AIComponent* owner)
      :AIState(owner)
   { }
   // Override with behaviors for this state
   void Update(float deltaTime) override;
   void OnEnter() override;
   void OnExit() override;
   const char* GetName() const override
   { return "Patrol"; }
};
You then implement any special behaviors in Update,
OnEnter, and OnExit. Suppose you want AIPatrol to
change to the AIDeath state when the character dies. To
initiate the transition, you need to call ChangeState on
the owning component, passing in the name of the new
state:
void AIPatrol::Update(float deltaTime)
{
   // Do some other updating
   // ...
   bool dead = /* Figure out if I'm dead */;
   if (dead)
   {
      // Tell the AI component to change states
      mOwner->ChangeState("Death");
   }
}
// Make an AIComponent
AIComponent* aic = new AIComponent(actor);
// Register states with AIComponent
aic->RegisterState(new AIPatrol(aic));
aic->RegisterState(new AIDeath(aic));
aic->RegisterState(new AIAttack(aic));
// Start in the patrol state
aic->ChangeState("Patrol");
PATHFINDING
A pathfinding algorithm finds a path between two
points, avoiding any obstacles in the way. The
complexity of this problem stems from the fact that
there might be a large set of paths between two
points, but only a small number of these paths are
the shortest. For example, Figure 4.3 shows two
potential routes between points A and B. An AI
traveling along the solid path is not particularly
intelligent because the dashed path is shorter. Thus,
you need a method to efficiently search through all
the possible paths to find one with the shortest
distance.
Graphs
Before you can solve the pathfinding problem, you
first need a way to represent the parts of the game
world that the AI can path through. A popular choice
is the graph data structure. A graph contains a set of
nodes (also called vertices). These nodes connect to
each other via edges. These edges can be
undirected, meaning they are traversable in both
directions, or directed, meaning they are
traversable in only one direction. You might use a
directed edge for a case where the AI can jump down
from a platform but can’t jump back up. You could
represent this connection with a directed edge from
the platform to the ground.
struct GraphNode
{
   // Each node has pointers to adjacent nodes
   std::vector<GraphNode*> mAdjacent;
};
struct Graph
{
// A graph contains nodes
std::vector<GraphNode*> mNodes;
};
struct WeightedEdge
{
   // Which nodes are connected by this edge?
   struct WeightedGraphNode* mFrom;
   struct WeightedGraphNode* mTo;
   // Weight of this edge
   float mWeight;
};

struct WeightedGraphNode
{
   // Stores outgoing edges
   std::vector<WeightedEdge*> mEdges;
};

// (A WeightedGraph has WeightedGraphNodes)
struct WeightedGraph
{
   std::vector<WeightedGraphNode*> mNodes;
};
Breadth-First Search
Suppose a game takes place in a maze designed in a
square grid. The game only allows movement in the
four cardinal directions. Because each move in the
maze is uniform in length, an unweighted graph can
represent this maze. Figure 4.5 shows a sample maze
and its corresponding graph.
Figure 4.5 A maze on a square grid and its
corresponding graph
using NodeToParentMap =
   std::unordered_map<const GraphNode*, const GraphNode*>;
Even if no path exists between the start and the goal, the
loop will eventually terminate. This is because the
algorithm checks all nodes that are reachable from start.
Once all possibilities are exhausted, the queue becomes
empty and the loop ends.
bool BFS(const Graph& graph, const GraphNode* start,
   const GraphNode* goal, NodeToParentMap& outMap)
{
   // Whether a path was found
   bool pathFound = false;
   // Nodes to consider
   std::queue<const GraphNode*> q;
   // Enqueue the first node
   q.emplace(start);

   while (!q.empty())
   {
      // Dequeue a node
      const GraphNode* current = q.front();
      q.pop();
      if (current == goal)
      {
         pathFound = true;
         break;
      }

      // Enqueue adjacent nodes that aren't already in the queue
      for (const GraphNode* node : current->mAdjacent)
      {
         // If the parent is null, it hasn't been enqueued
         // (except for the start node)
         const GraphNode* parent = outMap[node];
         if (parent == nullptr && node != start)
         {
            // Enqueue this node, setting its parent
            outMap[node] = current;
            q.emplace(node);
         }
      }
   }
   return pathFound;
}
// Usage (graph, start, and goal here are whatever graph and nodes
// you're searching)
NodeToParentMap map;
bool found = BFS(graph, start, goal, map);
BFS always finds a path between the start and goal nodes
if one exists. But for weighted graphs, BFS doesn’t
guarantee to find the shortest path. This is because BFS
doesn’t look at the weight of the edges at all; every edge
traversal is equivalent. In Figure 4.6, the dashed path has
the shortest distance, but BFS returns the solid path as it
requires only two moves.
Figure 4.6 BFS finds the solid path even though the
dashed path is shorter
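Once BFS succeeds, the actual path is recovered from the parent map: each entry points from a node back toward the start, so you walk from the goal to the start and reverse. This sketch uses plain ints as nodes instead of GraphNode pointers to stay self-contained:

```cpp
#include <algorithm>
#include <cassert>
#include <unordered_map>
#include <vector>

// Walk the BFS parent map from goal back to start, then reverse the
// result to get a start-to-goal path
std::vector<int> ReconstructPath(
    const std::unordered_map<int, int>& parent, int start, int goal)
{
    std::vector<int> path;
    int current = goal;
    path.push_back(current);
    while (current != start)
    {
        current = parent.at(current);
        path.push_back(current);
    }
    // The walk produced goal-to-start order, so reverse it
    std::reverse(path.begin(), path.end());
    return path;
}
```

This goal-to-start direction is also why it's often convenient to run the search from the goal toward the start, so the parent chain already reads in travel order.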
Heuristics
Many search algorithms rely on a heuristic, which
is a function that approximates an expected result. In
pathfinding, the heuristic is the estimated cost from
a given node to the goal node. A heuristic can help
you more quickly find a path. For example, on each
iteration of BFS, you dequeue the next node in the
queue, even if that node sends you in a direction
pointing away from the goal. With a heuristic, you
can estimate how close you think a specific node is to
the goal and then choose to look at the “closer” nodes
first. This way, the pathfinding algorithm is likely to
terminate with fewer iterations.
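Two common heuristics on a grid can be sketched directly. Manhattan distance sums the absolute component differences (admissible when movement is limited to the four cardinal directions), while Euclidean distance is the straight-line length (admissible for any movement, and never larger than Manhattan):

```cpp
#include <cassert>
#include <cmath>

// Manhattan distance: |dx| + |dy|
float Manhattan(float ax, float ay, float bx, float by)
{
    return std::fabs(ax - bx) + std::fabs(ay - by);
}

// Euclidean distance: straight-line length
float Euclidean(float ax, float ay, float bx, float by)
{
    float dx = ax - bx;
    float dy = ay - by;
    return std::sqrt(dx * dx + dy * dy);
}
```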
note
Although GBFS does not guarantee optimality, it’s useful to understand
because it requires only a couple modifications to become A*. The A*
algorithm does guarantee the shortest path if the heuristic is admissible. So
before moving on to A*, it’s important to understand the GBFS
implementation.
Selecting data structures for the open set and the closed
set presents an interesting dilemma. For the open set,
the two operations you need are removing the node with
the lowest cost and testing for membership. The closed
set only needs a membership test. To speed up the
membership test, you can simply use Booleans in scratch
data to track if a specific node is a member of the open
set or the closed set. And because the closed set needs
only this membership test, you don’t use an actual
collection for the closed set.
struct GBFSScratch
{
   const WeightedEdge* mParentEdge = nullptr;
   float mHeuristic = 0.0f;
   bool mInOpenSet = false;
   bool mInClosedSet = false;
};
using GBFSMap =
   std::unordered_map<const WeightedGraphNode*, GBFSScratch>;

// Set the current node to start, and mark it in the closed set
const WeightedGraphNode* current = start;
outMap[current].mInClosedSet = true;
Next, you enter the main loop of GBFS. This main loop
does several things. First, it looks at all nodes adjacent to
the current node. It only considers nodes that aren’t
already in the closed set. These nodes have their parent
edge set to the edge incoming from the current node. For
nodes not already in the open set, the code computes the
heuristic (from the node to the goal) and adds the node
to the open set:
do
{
   // Add adjacent nodes to the open set
   for (const WeightedEdge* edge : current->mEdges)
   {
      // Get scratch data for this node
      GBFSScratch& data = outMap[edge->mTo];
      // Add it only if it's not in the closed set
      if (!data.mInClosedSet)
      {
         // Set the adjacent node's parent edge
         data.mParentEdge = edge;
         if (!data.mInOpenSet)
         {
            // Compute the heuristic for this node, and add to open set
            data.mHeuristic = ComputeHeuristic(edge->mTo, goal);
            data.mInOpenSet = true;
            openSet.emplace_back(edge->mTo);
         }
      }
   }
   if (openSet.empty())
   {
      break; // Nothing left in open set; no path exists
   }
   // Find lowest cost node in open set
   auto iter = std::min_element(openSet.begin(), openSet.end(),
      [&outMap](const WeightedGraphNode* a, const WeightedGraphNode* b)
      {
         return outMap[a].mHeuristic < outMap[b].mHeuristic;
      });
   // Set to current and update open/closed set
   current = *iter;
   openSet.erase(iter);
   outMap[current].mInOpenSet = false;
   outMap[current].mInClosedSet = true;
} while (current != goal);
Keep in mind that just because a node in the open set has
the lowest heuristic cost doesn’t mean it’s on the optimal
path. For example, in Figure 4.9(b), the node C2 is not
on the optimal path. Unfortunately, the GBFS algorithm
still selects C2 for its path. Clearly, you need to do some
refinement to fix this issue.
Listing 4.3 shows the complete code for the greedy best-
first search function.
bool GBFS(const WeightedGraph& g, const WeightedGraphNode* start,
   const WeightedGraphNode* goal, GBFSMap& outMap)
{
   std::vector<const WeightedGraphNode*> openSet;
   // Set the current node to start, and mark it in the closed set
   const WeightedGraphNode* current = start;
   outMap[current].mInClosedSet = true;

   do
   {
      // Add adjacent nodes to the open set
      for (const WeightedEdge* edge : current->mEdges)
      {
         GBFSScratch& data = outMap[edge->mTo];
         if (!data.mInClosedSet)
         {
            data.mParentEdge = edge;
            if (!data.mInOpenSet)
            {
               // Compute the heuristic for this node, and add to open set
               data.mHeuristic = ComputeHeuristic(edge->mTo, goal);
               data.mInOpenSet = true;
               openSet.emplace_back(edge->mTo);
            }
         }
      }
      if (openSet.empty())
      { break; }
      // Find lowest cost node in open set
      auto iter = std::min_element(openSet.begin(), openSet.end(),
         [&outMap](const WeightedGraphNode* a, const WeightedGraphNode* b)
         {
            return outMap[a].mHeuristic < outMap[b].mHeuristic;
         });
      // Set to current and update open/closed set
      current = *iter;
      openSet.erase(iter);
      outMap[current].mInOpenSet = false;
      outMap[current].mInClosedSet = true;
   } while (current != goal);

   // Found a path if the current node is the goal
   return (current == goal);
}
A* Search
The downside of GBFS is that it can’t guarantee an
optimal path. Luckily, with some modifications to
GBFS, you can transform it into the A* search
(pronounced “A-star”). A* adds a path-cost
component, which is the actual cost from the start
node to a given node. The notation g(x) denotes the
path-cost of a node x. When selecting a new current
node, A* selects the node with the lowest f(x) value,
which is the sum of the g(x) path-cost and the
h(x) heuristic for that node:

f(x) = g(x) + h(x)
// Add adjacent nodes to the open set
for (const WeightedEdge* edge : current->mEdges)
{
   const WeightedGraphNode* neighbor = edge->mTo;
   // Get scratch data for this node
   AStarScratch& data = outMap[neighbor];
   // Only check nodes that aren't in the closed set
   if (!data.mInClosedSet)
   {
      if (!data.mInOpenSet)
      {
         // Not in the open set, so parent must be current
         data.mParentEdge = edge;
         data.mHeuristic = ComputeHeuristic(neighbor, goal);
         // Actual cost is the parent's plus cost of traversing edge
         data.mActualFromStart = outMap[current].mActualFromStart +
            edge->mWeight;
         data.mInOpenSet = true;
         openSet.emplace_back(neighbor);
      }
      else
      {
         // Compute what the new actual cost would be if current
         // became the parent
         float newG = outMap[current].mActualFromStart + edge->mWeight;
         if (newG < data.mActualFromStart)
         {
            // Adopt this node (current becomes its parent)
            data.mParentEdge = edge;
            data.mActualFromStart = newG;
         }
      }
   }
}
note
Optimizing A* to run as efficiently as possible is a complex topic. One
consideration is what happens if there are a lot of ties in the open set. This is
bound to happen in a square grid, especially if you use the Manhattan
heuristic. If there are too many ties, when it’s time to select a node, you have
a high probability of selecting one that doesn’t end up on the path. This
ultimately means you need to explore more nodes in the graph, which makes
A* run more slowly.
One way to help eliminate ties is to add a weight to the heuristic function,
such as arbitrarily multiplying the heuristic by 0.75. This gives more weight to
the path-cost g(x) function over the heuristic h(x) function, which means
you’re more likely to explore nodes further from the start node.
From an efficiency standpoint, A* actually is a poor choice for grid-based
pathfinding. Other pathfinding algorithms are far more efficient for grids. One
of them is the JPS+ algorithm, outlined in Steve Rabin’s Game AI Pro 2 (see
the “Additional Reading” section). However, A* works on any graph, whereas
JPS+ works only on grids.
Dijkstra’s Algorithm
Let’s return to the maze example but now suppose
that the maze has multiple pieces of cheese in it, and
you want the mouse to move toward the closest
cheese. A heuristic could approximate which cheese
is closest, and A* could find a path to that cheese.
But there’s a chance that the cheese you select with
the heuristic isn’t actually the closest because the
heuristic is only an estimate.
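This is exactly the case Dijkstra's algorithm handles: with no heuristic (equivalently, h(x) = 0 in A*), it computes the true shortest distance from the source to every reachable node, so you can then pick whichever cheese node ended up closest. The following sketch is not the chapter's graph code; it uses plain index-based adjacency lists of (neighbor, weight) pairs and a standard priority queue:

```cpp
#include <cassert>
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Dijkstra's algorithm: shortest distance from source to every node
std::vector<float> Dijkstra(
    const std::vector<std::vector<std::pair<int, float>>>& adj, int source)
{
    const float inf = std::numeric_limits<float>::infinity();
    std::vector<float> dist(adj.size(), inf);
    dist[source] = 0.0f;
    // Min-heap of (distance, node)
    using Entry = std::pair<float, int>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    open.emplace(0.0f, source);
    while (!open.empty())
    {
        auto [d, node] = open.top();
        open.pop();
        if (d > dist[node]) { continue; } // Stale entry, skip it
        for (const auto& [next, weight] : adj[node])
        {
            float newDist = d + weight;
            if (newDist < dist[next])
            {
                dist[next] = newDist;
                open.emplace(newDist, next);
            }
        }
    }
    return dist;
}
```

Running this once from the mouse's node gives distances to every piece of cheese, so the closest one can be chosen exactly rather than estimated.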
Following a Path
Once the pathfinding algorithm generates a path, the
AI needs to follow it. You can abstract the path as a
sequence of points. The AI then just moves from
point to point in this path. You can implement this in
a subclass of MoveComponent called
NavComponent. Because MoveComponent can
already move an actor forward, NavComponent only
needs to rotate the actor to face the correct direction
as the actor moves along the path.
First, the TurnTo function in NavComponent rotates
the actor to face a point:
void NavComponent::TurnTo(const Vector2& pos)
{
   // Vector from me to pos
   Vector2 dir = pos - mOwner->GetPosition();
   // New angle is just atan2 of this dir vector
   // (negate y because +y is down on screen)
   float angle = Math::Atan2(-dir.y, dir.x);
   mOwner->SetRotation(angle);
}
void NavComponent::Update(float deltaTime)
{
   // If you've reached the next point, advance along the path
   Vector2 diff = mOwner->GetPosition() - mNextPoint;
   if (diff.Length() <= 2.0f)
   {
      mNextPoint = GetNextPoint();
      TurnTo(mNextPoint);
   }
   // This moves the actor forward
   MoveComponent::Update(deltaTime);
}
GAME TREES
Games such as tic-tac-toe or chess are very different
from most real-time games. First, the game has two
players, and each player alternates taking a turn.
Second, the game is adversarial, meaning that the
two players are playing against each other. The AI
needs for these types of games are very different
from those of real-time games. These types of games
require some representation of the overall game
state, and this state informs the AI’s decisions. One
approach is to use a tree called a game tree. In a
game tree, the root node represents the current state
of the game. Each edge represents a move in the
game and leads to a new game state.
struct GameState
{
   // (For tic-tac-toe, the state is the 3x3 board)
   enum SquareState { Empty, X, O };
   SquareState mBoard[3][3];
};
struct GTNode
{
   // Children nodes
   std::vector<GTNode*> mChildren;
   // State of game at this node
   GameState mState;
};
To generate a complete game tree, you set the root node
to the current game state and create children for each
possible first move. Then you repeat this process for each
node in the first level and continue until all moves are
exhausted.
Minimax
The minimax algorithm evaluates a two-player
game tree to determine the best move for the current
player. Minimax assumes that each player will make
the choice most beneficial to herself. Because scores
are from the perspective of the max player, this
means the max player tries to maximize her score,
while the min player strives to minimize the score of
the max player.
// Forward declaration (MaxPlayer and MinPlayer are mutually recursive)
float MinPlayer(const GTNode* node);

float MaxPlayer(const GTNode* node)
{
   // If this is a leaf, return score
   if (node->mChildren.empty())
   {
      return GetScore(node->mState);
   }
   // Find the subtree with the maximum value
   float maxValue = -std::numeric_limits<float>::infinity();
   for (const GTNode* child : node->mChildren)
   {
      maxValue = std::max(maxValue, MinPlayer(child));
   }
   return maxValue;
}

float MinPlayer(const GTNode* node)
{
   // If this is a leaf, return score
   if (node->mChildren.empty())
   {
      return GetScore(node->mState);
   }
   // Find the subtree with the minimum value
   float minValue = std::numeric_limits<float>::infinity();
   for (const GTNode* child : node->mChildren)
   {
      minValue = std::min(minValue, MaxPlayer(child));
   }
   return minValue;
}
const GTNode* MinimaxDecide(const GTNode* root)
{
   // Find the subtree with the maximum value, and save the choice
   const GTNode* choice = nullptr;
   float maxValue = -std::numeric_limits<float>::infinity();
   for (const GTNode* child : root->mChildren)
   {
      float v = MinPlayer(child);
      if (v > maxValue)
      {
         maxValue = v;
         choice = child;
      }
   }
   return choice;
}
float MaxPlayerLimit(const GameState* state, int depth)
{
   // A depth of 0 or a terminal node ends the recursion
   if (depth == 0 || state->IsTerminal())
   {
      return state->GetScore();
   }
   // Find the subtree with the maximum value
   float maxValue = -std::numeric_limits<float>::infinity();
   for (const GameState* child : state->GetPossibleMoves())
   {
      maxValue = std::max(maxValue, MinPlayerLimit(child, depth - 1));
   }
   return maxValue;
}
Alpha-Beta Pruning
Alpha-beta pruning is an optimization of the
minimax algorithm that, on average, reduces the
number of nodes evaluated. In practice, this means
it’s possible to increase the maximum depth explored
without increasing the computation time.
const GameState* AlphaBetaDecide(const GameState* root, int maxDepth)
{
   const GameState* choice = nullptr;
   // Alpha starts at negative infinity, beta at positive infinity
   float maxValue = -std::numeric_limits<float>::infinity();
   float beta = std::numeric_limits<float>::infinity();
   for (const GameState* child : root->GetPossibleMoves())
   {
      float v = AlphaBetaMin(child, maxDepth - 1, maxValue, beta);
      if (v > maxValue)
      {
         maxValue = v;
         choice = child;
      }
   }
   return choice;
}
float AlphaBetaMax(const GameState* node, int depth, float alpha,
   float beta)
{
   if (depth == 0 || node->IsTerminal())
   {
      return node->GetScore();
   }
   float maxValue = -std::numeric_limits<float>::infinity();
   for (const GameState* child : node->GetPossibleMoves())
   {
      maxValue = std::max(maxValue,
         AlphaBetaMin(child, depth - 1, alpha, beta));
      if (maxValue >= beta)
      {
         return maxValue; // Beta prune
      }
      alpha = std::max(maxValue, alpha); // Increase lower bound
   }
   return maxValue;
}
float AlphaBetaMin(const GameState* node, int depth, float alpha,
   float beta)
{
   if (depth == 0 || node->IsTerminal())
   {
      return node->GetScore();
   }
   float minValue = std::numeric_limits<float>::infinity();
   for (const GameState* child : node->GetPossibleMoves())
   {
      minValue = std::min(minValue,
         AlphaBetaMax(child, depth - 1, alpha, beta));
      if (minValue <= alpha)
      {
         return minValue; // Alpha prune
      }
      beta = std::min(minValue, beta); // Decrease upper bound
   }
   return minValue;
}
GAME PROJECT
This chapter’s game project, shown in Figure 4.14, is
a tower defense game. In this style of game, the
enemies try to move from the start tile on the left to
an end tile on the right. Initially, the enemies move
in a straight line from left to right. However, the
player can build towers on squares in the grid, even
where the path is, which causes the path to redirect
around these towers as needed. The code is available
in the book’s GitHub repository, in the Chapter04
directory. Open Chapter04-windows.sln on
Windows and Chapter04-mac.xcodeproj on
Mac.
SUMMARY
Artificial intelligence is a deep topic with many
different sub-areas. Using state machines is an
effective way to give behaviors to AI-controlled
characters in a game. While a switch is the simplest
implementation of a state machine, the state design
pattern adds flexibility by making each state a
separate class.
ADDITIONAL READING
Many resources cover AI techniques. Stuart Russell
and Peter Norvig’s book is a popular AI text that
covers many techniques, though only some are
applicable to games. Mat Buckland’s book, although
dated, covers many useful game AI topics. Steve
Rabin’s Game AI Pro series has many interesting
articles written by different game AI developers.
EXERCISES
The two exercises for this chapter implement
techniques not used in this chapter’s game project.
The first looks at state machines, and the second uses
alpha-beta pruning for a four-in-a-row game.
Exercise 4.1
Given this chapter’s game project code, update either
the Enemy or Tower class (or both!) to use an AI
state machine. First, consider which behaviors the AI
should have and design the state machine graph.
Next, use the provided AIComponent and AIState
base classes to implement these behaviors.
Exercise 4.2
In a four-in-a-row game, players have a vertical grid
of six rows and seven columns. The two players take
turns putting a piece at the top of a column, and then
the piece slides down to the lowest free position in
the column. The game continues until one player gets
four in a row horizontally, vertically, or diagonally.
OPENGL
warning
OLDER VERSIONS OF OPENGL ARE VERY DIFFERENT: Be careful when
consulting any online OpenGL references, as many refer to older versions of
OpenGL.
int SDL_GL_SetAttribute(
   SDL_GLattr attr, // Attribute to set
   int value        // Value to set it to
);
note
There are three main profiles supported by OpenGL: core, compatibility, and
ES. The core profile is the recommended default profile for a desktop
environment. The only difference between the core and compatibility profiles
is that the compatibility profile allows the program to call OpenGL functions
that are deprecated (no longer intended for use). The OpenGL ES profile is
for mobile development.
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK,
SDL_GL_CONTEXT_PROFILE_CORE);
// Specify version 3.3
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_RED_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_ALPHA_SIZE, 8);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
// Member variable in Game
SDL_GLContext mContext;

// In Game::Initialize, after creating the window
mContext = SDL_GL_CreateContext(mWindow);

// In Game::Shutdown, before destroying the window
SDL_GL_DeleteContext(mContext);
// Initialize GLEW
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK)
{
   SDL_Log("Failed to initialize GLEW.");
   return false;
}
// On some platforms, GLEW will emit a benign error code,
// so clear it
glGetError();
note
Some old PC machines with integrated graphics (from 2012 or earlier) may
have issues running OpenGL version 3.3. In this case, you can try two
things: updating to newer graphics drivers or requesting OpenGL version 3.1.
Rendering a Frame
You now need to convert the clear, draw scene, and
swap buffers process in Game::GenerateOutput
to use OpenGL functions:
// Set the clear color to gray
glClearColor(0.86f, 0.86f, 0.86f, 1.0f);
// Clear the color buffer
glClear(GL_COLOR_BUFFER_BIT);
// (Draw the scene here)
// Swap the buffers, which also displays the scene
SDL_GL_SwapWindow(mWindow);
This code first sets the clear color to 86% red, 86% green,
86% blue, and 100% alpha, which yields a gray color. The
glClear call with the GL_COLOR_BUFFER_BIT
parameter clears the color buffer to the specified color.
Finally, the SDL_GL_SwapWindow call swaps the front
buffer and back buffer. At this point, running the game
yields a gray screen because you aren’t drawing the
SpriteComponents yet.
TRIANGLE BASICS
The graphical needs of 2D and 3D games couldn’t
seem more different. As discussed in Chapter 2,
“Game Objects and 2D Graphics,” most 2D games
use sprites for their 2D characters. On the other
hand, a 3D game features a simulated 3D
environment that you somehow flatten into a 2D
image that you show onscreen.
Early 2D games could simply copy sprite images into the
desired locations of the color buffer. This process, called
blitting, was efficient on sprite-based consoles such as
the Nintendo Entertainment System (NES). However,
modern graphical hardware is inefficient at blitting but is
very efficient at polygonal rendering. Because of this,
nearly all modern games, whether 2D or 3D, ultimately
use polygons for their graphical needs.
Why Polygons?
There are many ways a computer could simulate a
3D environment. Polygons are popular in games for a
multitude of reasons. Compared to other 3D graphics
techniques, polygons do not require as many
calculations at runtime. Furthermore, polygons are
scalable: A game running on less-powerful hardware
could simply use 3D models with fewer polygons.
And, importantly, you can represent most 3D objects
with polygons.
float vertices[] = {
   -0.5f,  0.5f, 0.0f, // vertex 0
    0.5f,  0.5f, 0.0f, // vertex 1
    0.5f, -0.5f, 0.0f, // vertex 2
    0.5f, -0.5f, 0.0f, // vertex 2 (again!)
   -0.5f, -0.5f, 0.0f, // vertex 3
   -0.5f,  0.5f, 0.0f  // vertex 0 (again!)
};
The solution to this issue has two parts. First, you create
a vertex buffer that contains only the unique
coordinates used by the 3D geometry. Then, to specify
the vertices of each triangle, you index into this vertex
buffer (much like indexing into an array). The aptly
named index buffer contains the indices for each
individual triangle, in sets of three. For this example’s
sample square, you’d need the following vertex and index
buffers:
float vertexBuffer[] = {
   -0.5f,  0.5f, 0.0f, // vertex 0
    0.5f,  0.5f, 0.0f, // vertex 1
    0.5f, -0.5f, 0.0f, // vertex 2
   -0.5f, -0.5f, 0.0f  // vertex 3
};
unsigned int indexBuffer[] = {
   0, 1, 2,
   2, 3, 0
};
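To see why indexing pays off, consider the memory arithmetic. The sketch below is an illustration, not book code; it assumes 4-byte floats and 4-byte indices, and the 8-floats-per-vertex case anticipates the position/normal/UV layout used for meshes later in the book:

```cpp
#include <cassert>
#include <cstddef>

// Bytes for a quad (2 triangles) drawn WITHOUT an index buffer:
// 6 vertices, each with floatsPerVertex floats.
constexpr std::size_t NoIndexBytes(std::size_t floatsPerVertex)
{
   return 6 * floatsPerVertex * sizeof(float);
}

// Bytes WITH an index buffer: 4 unique vertices plus 6 indices.
constexpr std::size_t IndexedBytes(std::size_t floatsPerVertex)
{
   return 4 * floatsPerVertex * sizeof(float) +
          6 * sizeof(unsigned int);
}
```

With position-only vertices (3 floats), both layouts happen to tie at 72 bytes per quad. But with richer vertices (8 floats each), the unindexed quad needs 192 bytes while the indexed one needs 152, and the savings grow with the number of attributes and the amount of vertex sharing in a mesh.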
class VertexArray
{
public:
   VertexArray(const float* verts, unsigned int numVerts,
      const unsigned int* indices, unsigned int numIndices);
   ~VertexArray();
   // Activate this vertex array (so we can draw it)
   void SetActive();
   unsigned int GetNumIndices() const { return mNumIndices; }
   unsigned int GetNumVerts() const { return mNumVerts; }
private:
   // How many vertices in the vertex buffer?
   unsigned int mNumVerts;
   // How many indices in the index buffer?
   unsigned int mNumIndices;
   // OpenGL ID of the vertex buffer
   unsigned int mVertexBuffer;
   // OpenGL ID of the index buffer
   unsigned int mIndexBuffer;
   // OpenGL ID of the vertex array object
   unsigned int mVertexArray;
};
VertexArray::VertexArray(const float* verts, unsigned int numVerts,
   const unsigned int* indices, unsigned int numIndices)
   : mNumVerts(numVerts)
   , mNumIndices(numIndices)
{
   // Create vertex array object
   glGenVertexArrays(1, &mVertexArray);
   glBindVertexArray(mVertexArray);
   // Create vertex buffer
   glGenBuffers(1, &mVertexBuffer);
   glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
   glBufferData(
      GL_ARRAY_BUFFER,              // The active buffer type to write to
      numVerts * 3 * sizeof(float), // Number of bytes to copy
      verts,                        // Source to copy from (pointer)
      GL_STATIC_DRAW                // How will we use this data?
   );
   // Create index buffer
   glGenBuffers(1, &mIndexBuffer);
   glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer);
   glBufferData(GL_ELEMENT_ARRAY_BUFFER,
      numIndices * sizeof(unsigned int),
      indices, GL_STATIC_DRAW);
   // Specify a position vertex attribute
   glEnableVertexAttribArray(0);
   glVertexAttribPointer(
      0,                 // Attribute index (first vertex attribute)
      3,                 // Number of components (x, y, z)
      GL_FLOAT,          // Type of the components
      GL_FALSE,          // (Only used for integral types)
      sizeof(float) * 3, // Stride (usually size of each vertex)
      0                  // Offset from start of vertex to this attribute
   );
}
VertexArray::~VertexArray()
{
   glDeleteBuffers(1, &mVertexBuffer);
   glDeleteBuffers(1, &mIndexBuffer);
   glDeleteVertexArrays(1, &mVertexArray);
}

void VertexArray::SetActive()
{
   glBindVertexArray(mVertexArray);
}
The vertex and index buffer variables here are the arrays
for the sprite quad. In this case, there are 4 vertices in
the vertex buffer and 6 indices in the index buffer
(corresponding to the 2 triangles in the quad). You will
use this VertexArray (stored in the game's mSpriteVerts
member variable) later in this chapter to draw sprites, as
all sprites ultimately use the same vertices.
SHADERS
In a modern graphics pipeline, you don’t simply feed
in the vertex/index buffers and have triangles draw.
Instead, you specify how you want to draw the
vertices. For example, should the triangles be a fixed
color, or should they use a color from a texture? Do
you want to perform lighting calculations for every
pixel you draw?
note
Shader programs do not use the C++ programming language. This book
uses the GLSL programming language for shader programs. Although GLSL
superficially looks like C, there are many semantics specific to GLSL. Rather
than present all the details of GLSL at once, this book introduces the
concepts as needed.
Vertex Shaders
A vertex shader program runs once for every
vertex of every triangle drawn. The vertex shader
receives the vertex attribute data as an input. The
vertex shader can then modify these vertex attributes
as it sees fit. While it may seem unclear why you’d
want to modify vertex attributes, it’ll become more
apparent as this chapter continues.
Fragment Shaders
After the vertices of a triangle have gone through the
vertex shader, OpenGL must determine which pixels
in the color buffer correspond to the triangle. This
process of converting the triangle into pixels is
rasterization. There are many different
rasterization algorithms, but today’s graphics
hardware does rasterization for us.
#version 330
in vec3 inPosition;
void main()
{
   gl_Position = vec4(inPosition, 1.0);
}

#version 330
out vec4 outColor;
void main()
{
   // Set to blue
   outColor = vec4(0.0, 0.0, 1.0, 1.0);
}
Loading Shaders
Once you have the separate shader files written, you
must load in these shaders in the game’s C++ code to
let OpenGL know about them. At a high level, you
need to follow these steps:
class Shader
{
public:
   Shader();
   ~Shader();
   // Load the vertex/fragment shaders with the given names
   bool Load(const std::string& vertName,
      const std::string& fragName);
   void Unload();
   // Set this as the active shader program
   void SetActive();
private:
   // Tries to compile the specified shader
   bool CompileShader(const std::string& fileName,
      GLenum shaderType, GLuint& outShader);
   // Tests whether shader compiled successfully
   bool IsCompiled(GLuint shader);
   // Tests whether vertex/fragment programs link
   bool IsValidProgram();
   // Store the shader object IDs
   GLuint mVertexShader;
   GLuint mFragShader;
   GLuint mShaderProgram;
};
The CompileShader Function
CompileShader takes three parameters: the name
of the shader file to compile, the type of shader, and
a reference parameter to store the ID of the shader.
The return value is a bool that denotes whether
CompileShader succeeded.
bool Shader::CompileShader(const std::string& fileName,
   GLenum shaderType,
   GLuint& outShader)
{
   // Open file
   std::ifstream shaderFile(fileName);
   if (shaderFile.is_open())
   {
      // Read all the text into a string
      std::stringstream sstream;
      sstream << shaderFile.rdbuf();
      std::string contents = sstream.str();
      const char* contentsChar = contents.c_str();
      // Create a shader of the specified type
      outShader = glCreateShader(shaderType);
      // Set the source characters and try to compile
      glShaderSource(outShader, 1, &contentsChar, nullptr);
      glCompileShader(outShader);
      if (!IsCompiled(outShader))
      {
         SDL_Log("Failed to compile shader %s", fileName.c_str());
         return false;
      }
   }
   else
   {
      SDL_Log("Shader file not found: %s", fileName.c_str());
      return false;
   }
   return true;
}
The IsCompiled Function
The IsCompiled function, shown in Listing 5.7,
validates whether a shader object compiled, and if it
didn’t, it outputs the compilation error message. This
way, you can get some information about why a
shader fails to compile.
Listing 5.7 Shader::IsCompiled Implementation
bool Shader::IsCompiled(GLuint shader)
{
   GLint status;
   // Query the compile status
   glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
   if (status != GL_TRUE)
   {
      char buffer[512];
      memset(buffer, 0, 512);
      glGetShaderInfoLog(shader, 511, nullptr, buffer);
      SDL_Log("GLSL Compile Failed:\n%s", buffer);
      return false;
   }
   return true;
}
The Load Function
The Load function in Listing 5.8 takes in the
filenames of both the vertex and fragment shaders
and then tries to compile and link these shaders
together.
bool Shader::Load(const std::string& vertName,
   const std::string& fragName)
{
   // Compile vertex and fragment shaders
   if (!CompileShader(vertName, GL_VERTEX_SHADER, mVertexShader) ||
       !CompileShader(fragName, GL_FRAGMENT_SHADER, mFragShader))
   {
      return false;
   }
   // Now create a shader program that links together
   // the vertex/fragment shaders
   mShaderProgram = glCreateProgram();
   glAttachShader(mShaderProgram, mVertexShader);
   glAttachShader(mShaderProgram, mFragShader);
   glLinkProgram(mShaderProgram);
   // Verify that the program linked successfully
   if (!IsValidProgram())
   {
      return false;
   }
   return true;
}
The IsValidProgram Function
The code for IsValidProgram is very similar to the
code for IsCompiled. There are only two
differences. First, instead of calling
glGetShaderiv, call glGetProgramiv:
void Shader::SetActive()
{
   glUseProgram(mShaderProgram);
}

void Shader::Unload()
{
   glDeleteProgram(mShaderProgram);
   glDeleteShader(mVertexShader);
   glDeleteShader(mFragShader);
}
bool Game::LoadShaders()
{
   mSpriteShader = new Shader();
   if (!mSpriteShader->Load("Shaders/Basic.vert", "Shaders/Basic.frag"))
   {
      return false;
   }
   mSpriteShader->SetActive();
   return true;
}
glDrawElements(
   GL_TRIANGLES,    // Type of polygon/primitive to draw
   6,               // Number of indices in index buffer
   GL_UNSIGNED_INT, // Type of each index
   nullptr          // Usually nullptr
);
void Game::GenerateOutput()
{
   // Set the clear color to gray
   glClearColor(0.86f, 0.86f, 0.86f, 1.0f);
   glClear(GL_COLOR_BUFFER_BIT);
   // Set sprite shader and vertex array objects active
   mSpriteShader->SetActive();
   mSpriteVerts->SetActive();
   // Draw all sprites
   for (auto sprite : mSprites)
   {
      sprite->Draw(mSpriteShader);
   }
   // Swap the buffers
   SDL_GL_SwapWindow(mWindow);
}
What happens when you run this code now? Well, first,
the fragment shader only writes out a blue color. So it’s
reasonable to expect that you’d see blue squares for each
SpriteComponent. However, there’s another issue: For
every sprite, you use the same sprite verts. These sprite
verts define a unit square in normalized device
coordinates. This means that for every
SpriteComponent, you merely draw the same unit
square in NDC. Thus, if you run the game right now,
you’ll see only a gray background and a rectangle, as in
Figure 5.3.
TRANSFORMATION BASICS
Suppose a game has 10 asteroids moving around.
You could represent these 10 asteroids individually
with different vertex array objects. However, you
need these asteroids to show up in different locations
onscreen. This means the triangles you draw for each
asteroid need different normalized device
coordinates.
Object Space
When you create a 3D object (such as in a 3D
modeling program), you generally don’t express
vertex positions in normalized device coordinates.
Instead, the positions are relative to an arbitrary
origin of the object itself. This origin is often in the
center of the object, but it does not have to be. This
coordinate space relative to the object itself is object
space, or model space.
World Space
To solve the problem with different objects having
different object space coordinates, you first define a
coordinate space for the game world itself. This
coordinate space, called world space, has its own
origin and basis vectors. For the game in the office
building, the origin of world space might be in the
center of the building on the ground floor.
When you draw each instance of the desk, you use the
same vertex array object for each desk. However, each
instance now needs some additional information,
specifying how you want to transform the object space
coordinates into world space. You can send this extra
data to the vertex shader when drawing an instance,
which allows the vertex shader to adjust the vertex
positions as needed. Of course, the graphics hardware
ultimately needs the coordinates in NDC to draw them,
so you still have an additional step after transforming the
vertices into world space. For now, let’s look at how to
transform vertices from their object space into world
space.
note
The 2D coordinate system used here is different from the SDL coordinate
system where +y is down! This means the code for Actor::GetForward
no longer negates the y component. Furthermore, if you use atan2 for any
calculations, you no longer negate the first parameter.
Translation
Translation takes a point and translates, or moves,
it by an offset. Given the point (x, y), you can
translate it by the offset (a, b) by using the following
equations:

x′ = x + a
y′ = y + b
Scale
When applied to each vertex in a triangle, scale
increases or decreases the size of the triangle. In a
uniform scale, you scale each component by the
same scale factor, s:

x′ = s · x
y′ = s · y
Rotation
Recall the discussion of the unit circle from Chapter
4, “Vectors and Basic Physics.” The unit circle begins
at the point (1, 0). A rotation of 90˚, or π/2 radians, is
counterclockwise to the point (0, 1), a rotation of
180˚, or π radians, is the point (–1, 0), and so on.
Given a point (x, y) and an angle θ, the rotated point
(x′, y′) is:

x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
This is technically a rotation about the z-axis, even
though you don’t draw the z-axis in a typical unit
circle diagram.
Combining Transformations
Although the preceding equations apply each
transformation independently, it’s common to
require multiple transformations on the same vertex.
For example, you might want to both translate and
rotate a quad. It’s important to combine these
transformations in the correct order.
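To make the ordering concrete, here is a small C++ sketch (the function name is mine, not the book's) that applies a scale, then a rotation about the z-axis, then a translation to a 2D point, following the equations above:

```cpp
#include <cassert>
#include <cmath>

// Applies scale s, then rotation by theta (radians), then
// translation by (a, b) to the 2D point (x, y), in place.
void TransformPoint(float& x, float& y, float s, float theta,
                    float a, float b)
{
   // Scale
   x *= s;
   y *= s;
   // Rotate about the z-axis
   float rx = x * std::cos(theta) - y * std::sin(theta);
   float ry = x * std::sin(theta) + y * std::cos(theta);
   // Translate
   x = rx + a;
   y = ry + b;
}
```

For the point (1, 0) with s = 2, θ = π/2, and offset (5, 0), this yields roughly (5, 2): scale gives (2, 0), rotation gives (0, 2), and translation gives (5, 2). Translating before rotating instead would spin the offset point around the origin and land somewhere else entirely, which is why the scale → rotate → translate order matters.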
MATRICES AND
TRANSFORMATIONS
A matrix is a grid of values, with m rows and n
columns. For example, you could write a 2×2 matrix as follows,
with a through d representing individual values in
the matrix:

A = | a  b |
    | c  d |
Scale Matrix
You can use a 2×2 scale matrix to apply the scale
transformation:

S(s_x, s_y) = | s_x   0  |
              |  0   s_y |
Combining Transformations
As mentioned earlier, you can combine multiple
transform matrices by multiplying them together.
However, you can’t multiply a 2×2 matrix with a 3×3
matrix. Thus, you must represent the scale and
rotation transforms with 3×3 matrices that work
with homogeneous coordinates (using the row-vector
convention, so translation appears in the bottom row):

Scale:       | s_x   0   0 |    Rotation:  |  cos θ  sin θ  0 |    Translation: | 1  0  0 |
             |  0   s_y  0 |               | −sin θ  cos θ  0 |                 | 0  1  0 |
             |  0    0   1 |               |    0      0    1 |                 | a  b  1 |
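As a hedged illustration of homogeneous coordinates (a standalone sketch, not the book's matrix class), the following treats a 2D point as the row vector (x, y, 1) and multiplies it by a 3×3 matrix; with the row-vector convention, translation lives in the matrix's bottom row:

```cpp
#include <cassert>

// Minimal 3x3 matrix (row-major storage)
struct Mat3
{
   float m[3][3];
};

// Multiplies the row vector (x, y, 1) by mat, in place.
// With row vectors, the point goes on the LEFT of the matrix.
void TransformPoint(float& x, float& y, const Mat3& mat)
{
   float nx = x * mat.m[0][0] + y * mat.m[1][0] + mat.m[2][0];
   float ny = x * mat.m[0][1] + y * mat.m[1][1] + mat.m[2][1];
   x = nx;
   y = ny;
}

// Homogeneous translation matrix: offset (a, b) in the bottom row
Mat3 Translation(float a, float b)
{
   return {{{1.0f, 0.0f, 0.0f},
            {0.0f, 1.0f, 0.0f},
            {a,    b,    1.0f}}};
}
```

Transforming the point (1, 2) by Translation(3, 4) yields (4, 6), something a 2×2 matrix cannot express because a 2×2 matrix can only scale, rotate, or shear about the origin.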
Matrix4 mWorldTransform;
bool mRecomputeWorldTransform;
void Actor::ComputeWorldTransform()
{
   if (mRecomputeWorldTransform)
   {
      mRecomputeWorldTransform = false;
      // Scale, then rotate, then translate
      mWorldTransform = Matrix4::CreateScale(mScale);
      mWorldTransform *= Matrix4::CreateRotationZ(mRotation);
      mWorldTransform *= Matrix4::CreateTranslation(
         Vector3(mPosition.x, mPosition.y, 0.0f));
   }
}
void Actor::Update(float deltaTime)
{
   if (mState == EActive)
   {
      ComputeWorldTransform();

      UpdateComponents(deltaTime);
      UpdateActor(deltaTime);

      ComputeWorldTransform();
   }
}
// In Game::UpdateGame, when moving pending actors into mActors...
pending->ComputeWorldTransform();
mActors.emplace_back(pending);
void Actor::ComputeWorldTransform()
{
   if (mRecomputeWorldTransform)
   {
      mRecomputeWorldTransform = false;
      // Scale, then rotate, then translate
      mWorldTransform = Matrix4::CreateScale(mScale);
      mWorldTransform *= Matrix4::CreateRotationZ(mRotation);
      mWorldTransform *= Matrix4::CreateTranslation(
         Vector3(mPosition.x, mPosition.y, 0.0f));

      // Inform components world transform updated
      for (auto comp : mComponents)
      {
         comp->OnUpdateWorldTransform();
      }
   }
}
In this case, you need two uniforms for the two different
matrices. You can declare these uniforms as follows:
#version 330
// Uniforms for world transform and view-projection
uniform mat4 uWorldTransform;
uniform mat4 uViewProj;
// Vertex attributes
in vec3 inPosition;
void main()
{
   vec4 pos = vec4(inPosition, 1.0);
   gl_Position = pos * uWorldTransform * uViewProj;
}
Now that you have uniforms in the vertex shader for the
world transform and view-projection matrices, you need
a way to set these uniforms from C++ code. OpenGL
provides functions to set uniform variables in the active
shader program. It makes sense to add wrappers for
these functions to the Shader class. For now, you can
add a function called SetMatrixUniform, shown in
Listing 5.12, to Shader.
void Shader::SetMatrixUniform(const char* name, const Matrix4& matrix)
{
   // Find the uniform by this name
   GLuint loc = glGetUniformLocation(mShaderProgram, name);
   // Send the matrix data to the uniform
   glUniformMatrix4fv(
      loc,                   // Uniform ID
      1,                     // Number of matrices (only 1 in this case)
      GL_TRUE,               // Set to TRUE if using row vectors
      matrix.GetAsFloatPtr() // Pointer to matrix data
   );
}
note
OpenGL has a newer approach to setting uniforms, called uniform buffer
objects (abbreviated UBOs). With UBOs, you can group together multiple
uniforms in the shader and send them all at once. For shader programs with
many uniforms, this generally is more efficient than individually setting each
uniform’s value.
With uniform buffer objects, you can split up uniforms into multiple groups.
For example, you may have a group for uniforms that update once per frame
and uniforms that update once per object. The view-projection won’t change
more than once per frame, while every actor will have a different world
transform matrix. This way, you can update all per-frame uniforms in just one
function call at the start of the frame. Likewise, you can update all per-object
uniforms separately for each object. To implement this, you must change
how you declare uniforms in the shader and how you mirror that data in the
C++ code.
However, at this writing, some hardware still has spotty support for UBOs.
Specifically, the integrated graphics chips of some laptops don’t fully support
uniform buffer objects. On other hardware, UBOs may even run more slowly
than uniforms set the old way. Because of this, this book does not use
uniform buffer objects. However, the concept of buffer objects is prevalent in
other graphics APIs, such as DirectX 11 and higher.
Now that you have a way to set the vertex shader’s matrix
uniforms, you need to set them. Because the simple view-
projection won’t change throughout the course of the
program, you only need to set it once. However, you need
to set the world transform matrix once for each sprite
component you draw because each sprite component
draws with the world transform matrix of its owning
actor.
// In Game::LoadShaders, after activating the sprite shader...
Matrix4 viewProj = Matrix4::CreateSimpleViewProj(1024.f, 768.f);
mSpriteShader->SetMatrixUniform("uViewProj", viewProj);

// In SpriteComponent::Draw...
// Scale the quad by the width/height of the texture
Matrix4 scaleMat = Matrix4::CreateScale(
   static_cast<float>(mTexWidth),
   static_cast<float>(mTexHeight),
   1.0f);
Matrix4 world = scaleMat * mOwner->GetWorldTransform();
shader->SetMatrixUniform("uWorldTransform", world);
// Draw quad
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
TEXTURE MAPPING
Texture mapping is a technique for rendering a
texture (image) on the face of a triangle. It allows
you to use colors from a texture when drawing a
triangle instead of using just a solid color.
class Texture
{
public:
   Texture();
   ~Texture();
   bool Load(const std::string& fileName);
   void Unload();
   void SetActive();
   int GetWidth() const { return mWidth; }
   int GetHeight() const { return mHeight; }
private:
   // OpenGL ID of this texture
   unsigned int mTextureID;
   // Width/height of the texture
   int mWidth;
   int mHeight;
};
The implementation of Load contains the bulk of the
Texture class code. You first declare a local variable to
store the number of channels and then call
SOIL_load_image to load in the texture:
bool Texture::Load(const std::string& fileName)
{
   int channels = 0;
   unsigned char* image = SOIL_load_image(
      fileName.c_str(), // Name of file
      &mWidth,          // Stores width
      &mHeight,         // Stores height
      &channels,        // Stores number of channels
      SOIL_LOAD_AUTO    // Type of image file, or auto for any
   );
   if (image == nullptr)
   {
      SDL_Log("SOIL failed to load image %s: %s",
         fileName.c_str(), SOIL_last_result());
      return false;
   }
   // Determine the image format (RGB or RGBA)
   int format = GL_RGB;
   if (channels == 4)
   {
      format = GL_RGBA;
   }
   glGenTextures(1, &mTextureID);
   glBindTexture(GL_TEXTURE_2D, mTextureID);
   // Copy the raw image data into the OpenGL texture
   glTexImage2D(GL_TEXTURE_2D, 0, format, mWidth, mHeight, 0, format,
      GL_UNSIGNED_BYTE, image);
   // Once copied, the raw image data is no longer needed
   SOIL_free_image_data(image);
   // Enable bilinear filtering
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
   return true;
}
void Texture::Unload()
{
   glDeleteTextures(1, &mTextureID);
}

void Texture::SetActive()
{
   glBindTexture(GL_TEXTURE_2D, mTextureID);
}
// In SpriteComponent::Draw...
mTexture->SetActive();
// Draw quad
glDrawElements(GL_TRIANGLES, 6,
GL_UNSIGNED_INT, nullptr);
float vertices[] = {
   -0.5f,  0.5f, 0.f, 0.f, 0.f, // top left
    0.5f,  0.5f, 0.f, 1.f, 0.f, // top right
    0.5f, -0.5f, 0.f, 1.f, 1.f, // bottom right
   -0.5f, -0.5f, 0.f, 0.f, 1.f  // bottom left
};

glBufferData(GL_ARRAY_BUFFER, numVerts * 5 * sizeof(float),
   verts, GL_STATIC_DRAW);

// Position is 3 floats
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
   sizeof(float) * 5, 0);
// Texture coordinates are 2 floats
glEnableVertexAttribArray(1);
glVertexAttribPointer(
   1,                 // Vertex attribute index
   2,                 // Number of components (UV)
   GL_FLOAT,          // Type of each component
   GL_FALSE,          // Not used for GL_FLOAT
   sizeof(float) * 5, // Stride (size of each vertex)
   reinterpret_cast<void*>(sizeof(float) * 3) // Offset pointer
);
tip
If you use a struct in C++ code to represent the format of the vertices, you
can use the offsetof macro to determine the offsets to a vertex attribute
rather than manually computing them. This is especially helpful if there is
padding between vertex elements.
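For instance, given a hypothetical struct matching an 8-float position/normal/UV vertex (the struct and field names here are assumptions for illustration, not from the book), offsetof reports exactly the byte offsets you would otherwise compute by hand:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical C++ mirror of an 8-float vertex layout
struct Vertex
{
   float position[3]; // offset 0
   float normal[3];   // offset 12
   float texCoord[2]; // offset 24
};

// offsetof(Vertex, normal) and offsetof(Vertex, texCoord) are the
// values you would pass (cast to void*) as the last argument of
// glVertexAttribPointer, and sizeof(Vertex) is the stride.
```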
Updating the Shaders
There previously was only one vertex attribute, so
you could just declare position as an in variable, and
GLSL knew which vertex attribute it corresponded
to. However, now that there are multiple vertex
attributes, you must specify which attribute slot
corresponds to which in variable. This changes the
variable declarations to the following:
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

The vertex shader must also pass the texture coordinate
along to the fragment shader, via an out variable:

#version 330
uniform mat4 uWorldTransform;
uniform mat4 uViewProj;
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;
// Add texture coordinate as output
out vec2 fragTexCoord;
void main()
{
   vec4 pos = vec4(inPosition, 1.0);
   gl_Position = pos * uWorldTransform * uViewProj;
   // Pass along the texture coordinate to frag shader
   fragTexCoord = inTexCoord;
}

The fragment shader then declares a corresponding in
variable:

in vec2 fragTexCoord;
This samples the color from the texture, using the texture
coordinates received from the vertex shader (after the
coordinates interpolate across the face of the triangle).
#version 330
// Tex coord input from vertex shader
in vec2 fragTexCoord;
// Output color
out vec4 outColor;
// For texture sampling
uniform sampler2D uTexture;
void main()
{
   // Sample color from texture
   outColor = texture(uTexture, fragTexCoord);
}
Alpha Blending
Alpha blending determines how to blend pixels
with transparency (an alpha channel less than 1).
Alpha blending uses an equation in the following
form to calculate the pixel color:

outputColor = srcFactor · sourceColor + dstFactor · destinationColor

For example, suppose you have 8 bits per color, and the
color buffer at some pixel is red. In this case, this is the
destination color:

destinationColor = (255, 0, 0)

Next, say that you want to draw a pixel that’s blue; this is
the source color:

sourceColor = (0, 0, 255)
glEnable(GL_BLEND);
glBlendFunc(
   GL_SRC_ALPHA,           // srcFactor is srcAlpha
   GL_ONE_MINUS_SRC_ALPHA  // dstFactor is 1 - srcAlpha
);
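A quick sanity check of the blend equation in plain C++ (one channel at a time, with colors normalized to [0, 1]; this is an illustration, not engine code): with srcFactor = srcAlpha and dstFactor = 1 − srcAlpha, a half-transparent blue pixel drawn over a red one comes out purple.

```cpp
#include <cassert>

// One color channel of:
// output = srcAlpha * source + (1 - srcAlpha) * destination
float BlendChannel(float src, float dst, float srcAlpha)
{
   return srcAlpha * src + (1.0f - srcAlpha) * dst;
}
```

Blending blue (0, 0, 1) over red (1, 0, 0) with a source alpha of 0.5 gives (0.5, 0, 0.5) per channel, which is the expected purple.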
GAME PROJECT
This chapter’s game project demonstrates all the
code to convert the game code from SDL graphics to
OpenGL. It converts the Asteroids game project from
Chapter 3 to instead use OpenGL. The controls are
the same as in Chapter 3: WASD to move the ship and
spacebar to fire lasers. The code is available in the
book’s GitHub repository, in the Chapter05
directory. Open Chapter05-windows.sln on
Windows and Chapter05-mac.xcodeproj on
Mac.
SUMMARY
Because graphics hardware is optimized for
polygons, 2D and 3D games internally use polygons
(usually triangles) to represent all graphical objects
in the world. Even a 2D sprite that you might think of
as an image is a rectangle with a texture mapped to
it. To send triangles to the graphics hardware, you
must declare the attributes of each vertex and create
a vertex and an index buffer.
ADDITIONAL READING
There are many excellent online references for
aspiring OpenGL developers. The official OpenGL
reference pages are useful for finding out what the
parameters for each function do. Of all the OpenGL
tutorial sites, one of the best is Learn OpenGL. For
an extensive look at the graphical techniques used in
game development, Real-Time Rendering by
Tomas Akenine-Möller et al. is a definitive
reference.
EXERCISES
The exercises for this chapter involve making some
modifications to this chapter’s game project to gain
more experience using various OpenGL functions.
Exercise 5.1
Modify the background clear color so that it
smoothly changes between colors. For example,
starting from black, smoothly change over several
seconds to blue. Then select another color (such as
red) and smoothly change over several seconds to
this other color. Think about how you can use
deltaTime in Game::Update to facilitate this
smooth transition.
Exercise 5.2
Modify the sprite vertices so that each vertex also has
an RGB color associated with it. This is known as a
vertex color. Update the vertex shader to take the
vertex color as an input and pass it to the fragment
shader. Then change the fragment shader so that
rather than simply drawing the color sampled from
the texture, it averages the color between the vertex
color and texture color.
CHAPTER 6
3D GRAPHICS
If you take your left hand and pretend that the thumb is
up, the index finger is forward, and the middle finger is
right, you see that it matches the coordinate system in
Figure 6.1 perfectly. Thus, this type of coordinate system
is a left-handed coordinate system. It would be
right-handed if +y were instead to the left.
Euler Angles
Representing rotations in 3D is more complex than
in 2D. Previously, an actor only needed one float for
rotation. This represented a rotation about the z-axis
because that’s the only rotation possible in 2D. But in
3D, it’s valid to rotate about any of the three
coordinate axes. One approach for 3D rotations is
Euler angles, where there are three angles (yaw,
pitch, and roll) representing rotation about each axis.
The names yaw, pitch, and roll come from airplane
terminology. Yaw is a rotation about the up axis,
pitch is a rotation about the side axis, and roll is a
rotation about the forward axis. In the coordinate
system from Figure 6.1, yaw is rotation about +z, pitch is
rotation about +y, and roll is rotation about +x.
Quaternions
Many games use quaternions instead of Euler
angles. The formal mathematical definition of a
quaternion is complex. For the purposes of this book,
think of a quaternion as a method to represent a
rotation about an arbitrary axis (not just x, y, or z).
Basic Definition
3D graphics use unit quaternions, which are
quaternions with a magnitude of one. A quaternion
has both a vector and a scalar component. This book
uses the following notation to represent a quaternion
as its vector and scalar components:

q = [q_v, q_s]
The calculation of the vector and scalar components
depends on the normalized axis of rotation, â, and the
angle of rotation, θ:

q_v = â sin(θ/2)
q_s = cos(θ/2)
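These two formulas are easy to verify in code. The sketch below (a standalone struct, not the book's Quaternion class) builds the components for a rotation of 90˚ about +z:

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion: vector part (x, y, z), scalar part w
struct Quat
{
   float x, y, z, w;
};

// q_v = axis * sin(theta / 2), q_s = cos(theta / 2)
// Assumes (ax, ay, az) is already normalized.
Quat FromAxisAngle(float ax, float ay, float az, float theta)
{
   float s = std::sin(theta / 2.0f);
   return { ax * s, ay * s, az * s, std::cos(theta / 2.0f) };
}
```

For θ = π/2 about the z-axis, both the z and w components come out to sin(45˚) = cos(45˚) ≈ 0.7071, and the overall magnitude is 1, so the result is a unit quaternion as required.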
Quaternions in Code
As with vectors and matrices, for quaternions the
custom Math.h header file has a Quaternion class.
Listing 6.1 shows the most useful functions. Because
the order of multiplication of quaternions often
confuses game programmers (for example, to rotate
p followed by q, you multiply q by p), instead of using
the multiplication operator, the Math.h library
declares a Concatenate function. This function
simply takes in the quaternions in the order many
expect—so the rotation “p followed by q” is as
follows:

Quaternion result = Quaternion::Concatenate(p, q);
class Quaternion
{
public:
   // Functions/data omitted
   // ...
   // Identity quaternion:
   // v = (0, 0, 0); s = 1
   static const Quaternion Identity;
};
// In Matrix4...
// Create Matrix4 from quaternion
static Matrix4 CreateFromQuaternion(const class Quaternion& q);

// In Vector3...
// Transform a Vector3 by a quaternion
static Vector3 Transform(const Vector3& v,
   const class Quaternion& q);
Vector3 mPosition;
Quaternion mRotation;
float mScale;
With this new transform representation, the code for
calculating the world transform matrix in
ComputeWorldTransform changes to this:
mWorldTransform = Matrix4::CreateScale(mScale);
mWorldTransform *= Matrix4::CreateFromQuaternion(mRotation);
mWorldTransform *= Matrix4::CreateTranslation(mPosition);
void MoveComponent::Update(float deltaTime)
{
   if (!Math::NearZero(mAngularSpeed))
   {
      Quaternion rot = mOwner->GetRotation();
      float angle = mAngularSpeed * deltaTime;
      // Create quaternion for incremental rotation
      // (Rotate about up axis)
      Quaternion inc(Vector3::UnitZ, angle);
      // Concatenate old and new quaternion
      rot = Quaternion::Concatenate(rot, inc);
      mOwner->SetRotation(rot);
   }
   // Updating position based on forward speed stays the same
   // ...
}
LOADING 3D MODELS
For sprite-based games, every sprite draws with a
single quad, which means it’s okay to hard-code the
vertex and index buffers. However, for a full 3D
game, there are a lot of other triangular meshes. For
example, a first-person shooter has enemy meshes,
weapon meshes, character meshes, meshes for the
environment, and so on. An artist creates these
models in a 3D modeling program such as Blender or
Autodesk Maya. The game then needs code to load
these models into vertex and index buffers.
{
"version":1,
"vertexformat":"PosNormTex",
"shader":"BasicMesh",
"textures":[
"Assets/Cube.png"
],
"vertices":[
[1.0,1.0,-1.0,0.57,0.57,-0.57,0.66,0.33],
[1.0,-1.0,-1.0,0.57,-0.57,-0.57,0.66,0.0],
[-1.0,-1.0,-1.0,-0.57,-0.57,-0.57,1.0,0.33],
[-1.0,1.0,-1.0,-0.57,0.57,-0.57,0.66,0.66],
[1.0,1.0,1.0,0.57,0.57,0.57,0.33,0.33],
[1.0,-1.0,1.0,0.57,-0.57,0.57,0.0,0.0],
[-1.0,-1.0,1.0,-0.57,-0.57,0.57,0.66,0.33],
[-1.0,1.0,1.0,-0.57,0.57,0.57,0.33,0.66]
],
"indices":[
[1,3,0],
[7,5,4],
[4,1,0],
[5,2,1],
[2,7,3],
[0,7,4],
[1,2,3],
[7,6,5],
[4,5,1],
[5,6,2],
[2,6,7],
[0,3,7]
]
}
Every vertex array will use the new vertex layout, so the
constructor for VertexArray changes to specify this
new layout. Most notably, the size of each vertex is now
eight floats, and you add an attribute for the normal:
// Position is 3 floats
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
   sizeof(float) * 8, 0);
// Normal is 3 floats
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8,
   reinterpret_cast<void*>(sizeof(float) * 3));
// Texture coordinates are 2 floats
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 8,
   reinterpret_cast<void*>(sizeof(float) * 6));
class Mesh
{
public:
   Mesh();
   ~Mesh();
   // Load/unload mesh
   bool Load(const std::string& fileName, class Renderer* renderer);
   void Unload();
   // Get the vertex array associated with this mesh
   class VertexArray* GetVertexArray() { return mVertexArray; }
   // Get a texture from specified index
   class Texture* GetTexture(size_t index);
   // Get name of shader
   const std::string& GetShaderName() const { return mShaderName; }
   // Get object space bounding sphere radius
   float GetRadius() const { return mRadius; }
private:
   // Textures associated with this mesh
   std::vector<class Texture*> mTextures;
   // Vertex array associated with this mesh
   class VertexArray* mVertexArray;
   // Name of shader specified by mesh
   std::string mShaderName;
   // Stores object space bounding sphere radius
   float mRadius;
};
DRAWING 3D MESHES
Once 3D meshes are loaded, the next step is to draw
them. However, there are a lot of topics to touch on
before the 3D meshes start showing up.
class Renderer
{
public:
   Renderer(class Game* game);
   ~Renderer();
   bool Initialize(float screenWidth, float screenHeight);
   void Shutdown();
   void UnloadData();
   void Draw();
   // ...
private:
   bool LoadShaders();
   void CreateSpriteVerts();
   // ...
};
View Matrix
The view matrix represents the position and
orientation of the camera, or “eye” in the world.
Chapter 9, “Cameras,” covers several different
implementations of cameras, but for now let’s keep it
simple. At a minimum, a look-at matrix represents
the position and orientation of the camera.
// Location of camera
Vector3 eye = mCameraActor->GetPosition();
// Point 10 units in front of the camera
Vector3 target = mCameraActor->GetPosition() +
   mCameraActor->GetForward() * 10.0f;
Projection Matrix
The projection matrix determines how the 3D
world flattens into the 2D world drawn onscreen.
Two types of projection matrices are common in 3D
games: orthographic and perspective.
note
We omit the derivation of the orthographic and perspective matrices here.
Both types of projection matrices have helper functions
in the Math.h library. You can use
Matrix4::CreateOrtho for an orthographic matrix
and Matrix4::CreatePerspectiveFOV for a
perspective matrix.
Z-Buffering
Z-buffering (or depth buffering) uses an additional
memory buffer during the rendering process. This
buffer, known as the z-buffer (or depth buffer),
stores data for each pixel in the scene, much like the
color buffer. But while the color buffer stores color
information, the z-buffer stores the distance from the
camera, or depth, at each pixel. Collectively, the set
of buffers that graphically represent the frame
(including the color buffer, z-buffer, and others) is
the frame buffer.
// Z-buffering pseudocode
foreach MeshComponent m in scene
   foreach Pixel p in m
      float depth = p.Depth()
      if depth < zBuffer[p.x][p.y]
         zBuffer[p.x][p.y] = depth
         p.draw()
      endif
   endfor
endfor
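The same idea as runnable C++, reduced to a single pixel (an illustrative sketch, not the renderer's code): each candidate fragment only lands in the color buffer if its depth beats the depth already stored for that pixel, so draw order stops mattering.

```cpp
#include <cassert>
#include <limits>

// One pixel's worth of frame buffer: a depth value and a color
struct FrameBuffer
{
   // Initialize depth to "infinitely far away"
   float zBuffer = std::numeric_limits<float>::infinity();
   int colorBuffer = 0;

   // Emulates the depth test for one incoming fragment
   void DrawFragment(float depth, int color)
   {
      // Keep the fragment only if it's closer than what's stored
      if (depth < zBuffer)
      {
         zBuffer = depth;
         colorBuffer = color;
      }
   }
};
```

Drawing fragments at depths 10, 5, and 8 (in that order) leaves the depth-5 fragment's color in the buffer, regardless of the fact that it was not drawn last.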
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
mMeshShader->SetActive();
mView = Matrix4::CreateLookAt(
   Vector3::Zero,  // Camera position
   Vector3::UnitX, // Target position
   Vector3::UnitZ  // Up
);
mProjection = Matrix4::CreatePerspectiveFOV(
   Math::ToRadians(70.0f), // Horizontal FOV
   mScreenWidth,           // Width of view
   mScreenHeight,          // Height of view
   25.0f,                  // Near plane distance
   10000.0f                // Far plane distance
);
class MeshComponent : public Component
{
public:
   MeshComponent(class Actor* owner);
   ~MeshComponent();
   // Draw this mesh component with the provided shader
   virtual void Draw(class Shader* shader);
   // Set the mesh/texture index used by mesh component
   virtual void SetMesh(class Mesh* mesh) { mMesh = mesh; }
   void SetTextureIndex(size_t index) { mTextureIndex = index; }
protected:
   class Mesh* mMesh;
   size_t mTextureIndex;
};
void MeshComponent::Draw(Shader* shader)
{
   if (mMesh)
   {
      // Set the world transform
      shader->SetMatrixUniform("uWorldTransform",
         mOwner->GetWorldTransform());
      // Set the active texture
      Texture* t = mMesh->GetTexture(mTextureIndex);
      if (t) { t->SetActive(); }
      // Set the mesh's vertex array as active
      VertexArray* va = mMesh->GetVertexArray();
      va->SetActive();
      // Draw
      glDrawElements(GL_TRIANGLES, va->GetNumIndices(),
         GL_UNSIGNED_INT, nullptr);
   }
}
// Enable depth buffering, disable alpha blending
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
// Set the basic mesh shader active
mMeshShader->SetActive();
// Update view-projection matrix
mMeshShader->SetMatrixUniform("uViewProj", mView * mProjection);
for (auto mc : mMeshComps)
{
   mc->Draw(mMeshShader);
}
LIGHTING
So far, the mesh fragment shader directly uses the
texture color as the final color for a pixel. However,
without any contrast, the scene looks dull. To
approximate concepts such as the sun or light bulbs
or simply to add variety to the scene, you need
lighting.
Types of Lights
While there are many potential choices, a handful of
light types consistently see use in 3D games. Some
lights globally affect the entire scene, whereas other
lights affect only the area around the light.
Ambient Light
Ambient light is a uniform amount of light applied
to every single object in a scene. The amount of
ambient light might differ for different levels in a
game, depending on the time of day. A level set at
night will have a much darker and cooler ambient
light than a level set in the daytime, which will be
brighter and warmer.
Directional Light
A directional light is a light emitted from a specific
direction. Like ambient light, directional light affects
an entire scene. However, because a directional light
comes from a specific direction, it illuminates one
side of objects while leaving the other side in
darkness. An example of a directional light is the sun
on a sunny day. The direction of the light depends on
where the sun is at that time of day. The side facing
the sun is bright, while the other side is dark. Figure
6.9(b) shows a directional light at Yellowstone
National Park. (Note that in a game, shadowing is
not a property of a directional light itself. Instead,
computing shadows requires additional
calculations.)
Point Light
A point light exists at a specific point and emanates
in all directions from that point. Because it starts at a
specific point, a point light also illuminates only one
side of an object. Usually, a point light also has a
radius of influence. For example, think of a light bulb
in a dark room, as in Figure 6.10(a). There’s visible
light in the area immediately around the light, but it
slowly dissipates until it no longer adds light. The
point light doesn’t go on infinitely.
Spotlight
A spotlight is much like a point light, except that
instead of traveling in all directions, it’s focused in a
cone. To simulate a spotlight, you need all the
parameters of a point light and additionally the angle
of the cone. A classic example of a spotlight is a
theater spotlight, but another example is a flashlight
in the dark. Figure 6.10(b) illustrates a spotlight.
ka—Ambient color
kd—Diffuse color
ks—Specular color
Figure 6.12 Diagram of Phong reflection
calculations (vectors not to scale)
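The per-channel arithmetic of the Phong model can be sketched in C++ as follows (a scalar illustration with assumed names; the actual shader operates on vec3 colors). It assumes N, L, R, and V are already normalized, so the dot products N·L and R·V are passed in directly:

```cpp
#include <cassert>
#include <cmath>

// Phong = ambient + diffuse * max(0, N.L)
//                 + specular * max(0, R.V)^specPower
// One color channel; all input vectors assumed normalized.
float PhongIntensity(float ambient, float diffuse, float specular,
                     float NdotL, float RdotV, float specPower)
{
   float result = ambient;
   // Diffuse and specular contribute only if the surface
   // faces the light
   if (NdotL > 0.0f)
   {
      result += diffuse * NdotL;
      result += specular * std::pow(std::fmax(0.0f, RdotV), specPower);
   }
   return result;
}
```

With ambient 0.2, diffuse 0.5, and the surface facing the light head-on (N·L = 1) but the reflection pointing away from the camera (R·V = 0), the intensity is 0.7; a surface facing away from the light (N·L ≤ 0) receives only the 0.2 ambient term.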
Implementing Lighting
This section covers how to add ambient and
directional lights to the game. Implementing this
requires changes to both the vertex and fragment
shaders. The BasicMesh.vert/.frag shaders are
a starting point for the new Phong.vert/.frag
shaders. (Remember that this shader code is in
GLSL, not C++.) You’ll then change it so all meshes
use this new Phong shader.
struct DirectionalLight
{
   // Direction of light
   vec3 mDirection;
   // Diffuse color
   vec3 mDiffuseColor;
   // Specular color
   vec3 mSpecColor;
};
void Renderer::SetLightUniforms(Shader* shader)
{
   // Camera position is from inverted view
   Matrix4 invView = mView;
   invView.Invert();
   shader->SetVectorUniform("uCameraPos", invView.GetTranslation());
   // Ambient light
   shader->SetVectorUniform("uAmbientLight", mAmbientLight);
   // Directional light
   shader->SetVectorUniform("uDirLight.mDirection", mDirLight.mDirection);
   shader->SetVectorUniform("uDirLight.mDiffuseColor",
      mDirLight.mDiffuseColor);
   shader->SetVectorUniform("uDirLight.mSpecColor", mDirLight.mSpecColor);
}
Next, you update the gpmesh file format so that you can
specify the specular power of a mesh’s surface with the
specularPower property. You then update the
Mesh::Load code to read in this property, and set the
uSpecPower uniform in MeshComponent::Draw right
before drawing the mesh.
void main()
{
   // Convert position to homogeneous coordinates
   vec4 pos = vec4(inPosition, 1.0);
   // Transform position to world space
   pos = pos * uWorldTransform;
   // Save world position
   fragWorldPos = pos.xyz;
   // Transform to clip space
   gl_Position = pos * uViewProj;
   // Transform normal into world space (w = 0)
   fragNormal = (vec4(inNormal, 0.0) * uWorldTransform).xyz;
   // Pass along the texture coordinate to frag shader
   fragTexCoord = inTexCoord;
}
void main()
{
   // Surface normal
   vec3 N = normalize(fragNormal);
   // Vector from surface to light
   vec3 L = normalize(-uDirLight.mDirection);
   // Vector from surface to camera
   vec3 V = normalize(uCameraPos - fragWorldPos);
   // Reflection of -L about N
   vec3 R = normalize(reflect(-L, N));

   // Compute Phong reflection
   vec3 Phong = uAmbientLight;
   float NdotL = dot(N, L);
   if (NdotL > 0)
   {
      vec3 Diffuse = uDirLight.mDiffuseColor * NdotL;
      vec3 Specular = uDirLight.mSpecColor *
         pow(max(0.0, dot(R, V)), uSpecPower);
      Phong += Diffuse + Specular;
   }
   // Final color is texture color times Phong light (alpha = 1)
   outColor = texture(uTexture, fragTexCoord) * vec4(Phong, 1.0);
}
GAME PROJECT
This chapter’s game project implements most of the
topics covered: mesh loading, a MeshComponent,
and the Phong shader. Figure 6.14 shows the final
version of this chapter’s game project. The code is
available in the book’s GitHub repository, in the
Chapter06 directory. Open Chapter06-
windows.sln on Windows and Chapter06-
mac.xcodeproj on Mac.
Figure 6.14 Chapter 6 game project
SUMMARY
This chapter covers the process of transitioning from
a 2D game world to a 3D game world. Actors now
have a transform with a 3D position and a
quaternion for rotations about an arbitrary axis.
ADDITIONAL READING
Rendering is a highly specialized area of game
programming, and excelling in rendering requires a
strong foundation in mathematics. There are many
excellent resources available. Thomas Akenine-
Moller’s book, although somewhat dated, is a
popular reference for rendering programmers—and
an updated fourth edition is forthcoming. Although
this book uses OpenGL, there are alternative
graphics APIs. For PC and Xbox, the DirectX API
dominates. Frank Luna’s book covers how to use
DirectX 11. Finally, Matt Pharr’s text is an excellent
overview of a realistic lighting technique called
physically based rendering.
EXERCISES
This chapter’s exercises involve adding
improvements to the game project. In the first
exercise you add support for different meshes
rendering with different shaders. In the second
exercise you add point lights, which provide a great
deal of flexibility in the lighting of the game.
Exercise 6.1
Modify the mesh rendering code so that it’s possible
to draw different meshes with different shaders. This
means storing the different mesh shaders in a map
and ensuring that each shader has its uniforms set
properly.
Exercise 6.2
Because a point light affects a limited radius, these
lights can add a lot to a scene. Modify the Phong
shader so that it also supports a maximum of four
point lights in the scene. Create a struct for point
lights much like the struct for directional lights. This
struct needs a position of the light, diffuse color,
specular color, specular power, and a radius of
influence. Then create an array of point lights as a
uniform. (Arrays work in GLSL just like in C/C++.)
The Phong equations are the same, except the code now
needs to consider all lights for specular and diffuse. In
addition, a point light should affect a pixel only if the
pixel is within that light’s radius. To test this, create
different point lights at different positions and with
different colors.
CHAPTER 7
AUDIO
FMOD
Designed by Firelight Technologies, FMOD
(https://fmod.com) is a popular sound engine for
video games. FMOD supports any realistic game
platform, including Windows, Mac, Linux, iOS,
Android, HTML5, and every modern console. The
current version of FMOD has two distinct
components: FMOD Studio, which is an external
authoring tool for sound designers, and the FMOD
API (application programming interface), which
integrates into games that use FMOD.
note
This chapter doesn’t cover how to use FMOD Studio, but there are excellent
references available on the official FMOD website, among other places. For
interested readers, the FMOD Studio project file used for the audio content in
this chapter is in the GitHub repository, in the FMODStudio/Chapter07
directory.
The FMOD API has two parts. The FMOD Low Level API
is the foundation for FMOD. It contains functionality to
load and play sounds, manage channels, update sounds
in a 3D environment, add digital effects to sound, and
more. It’s possible to use the Low Level API by itself, but
then any events created in FMOD Studio are not usable.
Supporting FMOD Studio requires the FMOD Studio
API, which builds on the Low Level API. However, using
the FMOD Studio API does not preclude an audio
programmer from accessing the Low Level API if needed.
For the most part, this chapter uses the FMOD Studio
API.
Installing FMOD
Because of FMOD’s licensing terms, the book’s
source code on GitHub does not include the FMOD
library and header files. Luckily, FMOD is free to
download and has very favorable licensing terms for
commercial projects. (See the FMOD site for details.)
To download the FMOD library, go to the FMOD site
(https://fmod.com) and create an account.
note
With the exception of Chapter 8, “Input Systems,” every chapter after this
one also uses the audio code from this chapter. Therefore, it’s important to
ensure that you install FMOD properly, or none of the subsequent chapters’
projects will run.
class AudioSystem
{
public:
   AudioSystem(class Game* game);
   ~AudioSystem();
   bool Initialize();
   void Shutdown();
private:
   // FMOD Studio system
   FMOD::Studio::System* mSystem;
   // FMOD Low Level system (in case needed)
   FMOD::System* mLowLevelSystem;
};
The header fmod_studio.hpp defines the FMOD
Studio API types. However, to avoid this include,
AudioSystem.h instead creates forward declarations of
the FMOD types. This way, you only need to include the
FMOD header in AudioSystem.cpp.
FMOD::Debug_Initialize(
   FMOD_DEBUG_LEVEL_ERROR, // Log only errors
   FMOD_DEBUG_MODE_TTY // Output to stdout
);
note
Initializing debug logging is relevant only if you’re using the logging build of
FMOD, as is the case in this chapter. Enabling error logging is extremely
useful during development, but a shipped version of a game shouldn’t
include logging.
Next, construct an instance of an FMOD Studio system
with this code:
FMOD_RESULT result;
result = FMOD::Studio::System::create(&mSystem);
if (result != FMOD_OK)
{
   SDL_Log("Failed to create FMOD system: %s",
      FMOD_ErrorString(result));
   return false;
}
result = mSystem->initialize(
   512, // Max number of concurrent sounds
   FMOD_STUDIO_INIT_NORMAL, // Use default settings
   FMOD_INIT_NORMAL, // Use default settings
   nullptr // Usually null
);
note
FMOD uses a naming convention in which member functions begin with a
lowercase letter. This is different from this book’s naming convention, which
uses an uppercase letter for the first letter of a member function.
Finally, you grab and save the Low Level system pointer
to complete initialization:
mSystem->getLowLevelSystem(&mLowLevelSystem);
Loading/Unloading Banks
Loading a bank minimally requires calling the
loadBank function on the mSystem object.
However, this does not load the sample data and
does not give easy access to the event descriptions. It
makes sense to create a new function in
AudioSystem called LoadBank, as shown in Listing
7.2, that does a bit more than the minimum
loadBank call. Once the bank loads, you add the
bank to the mBanks map. You then load the sample
data for the bank. Then use getEventCount and
getEventList to get the list of all event
descriptions in the bank. Finally, you add each of
these event descriptions to the mEvents map so they
are easily accessible.
void AudioSystem::LoadBank(const std::string& name)
{
   // Prevent double-loading
   if (mBanks.find(name) != mBanks.end())
   { return; }
   // Try to load bank
   FMOD::Studio::Bank* bank = nullptr;
   FMOD_RESULT result = mSystem->loadBankFile(
      name.c_str(), // File name of bank
      FMOD_STUDIO_LOAD_BANK_NORMAL, // Normal loading
      &bank // Save pointer to bank
   );
   const int maxPathLength = 512;
   if (result == FMOD_OK)
   {
      // Add bank to map and load all non-streaming sample data
      mBanks.emplace(name, bank);
      bank->loadSampleData();
      // Get the number of events in this bank
      int numEvents = 0;
      bank->getEventCount(&numEvents);
      if (numEvents > 0)
      {
         // Get list of event descriptions in this bank
         std::vector<FMOD::Studio::EventDescription*> events(numEvents);
         bank->getEventList(events.data(), numEvents, &numEvents);
         char eventName[maxPathLength];
         for (int i = 0; i < numEvents; i++)
         {
            FMOD::Studio::EventDescription* e = events[i];
            // Get the path of this event (like event:/Explosion2D)
            e->getPath(eventName, maxPathLength, nullptr);
            mEvents.emplace(eventName, e);
         }
      }
   }
}
LoadBank("Assets/Master Bank.strings.bank");
LoadBank("Assets/Master Bank.bank");
Note how the code loads the master strings bank first.
The master strings bank is a special bank that contains
the human-readable names of all events and other data
in the FMOD Studio project. If you don’t load this bank,
the names are inaccessible in code. Without the names,
the code needs to use GUIDs (globally unique IDs) to
access all the FMOD Studio data. This means that,
technically, loading the master strings bank is optional,
but loading the strings makes the AudioSystem easier
to implement.
void AudioSystem::PlayEvent(const std::string& name)
{
   // Make sure event exists
   auto iter = mEvents.find(name);
   if (iter != mEvents.end())
   {
      // Create instance of event
      FMOD::Studio::EventInstance* event = nullptr;
      iter->second->createInstance(&event);
      if (event)
      {
         event->start();
         // release marks the instance for destruction once it stops
         event->release();
      }
   }
}
std::unordered_map<unsigned int,
   FMOD::Studio::EventInstance*> mEventInstances;

SoundEvent AudioSystem::PlayEvent(const std::string& name)
{
   unsigned int retID = 0;
   auto iter = mEvents.find(name);
   if (iter != mEvents.end())
   {
      FMOD::Studio::EventInstance* event = nullptr;
      iter->second->createInstance(&event);
      if (event)
      {
         event->start();
         // Get the next id, and add to map
         sNextID++;
         retID = sNextID;
         mEventInstances.emplace(retID, event);
      }
   }
   return SoundEvent(this, retID);
}
void AudioSystem::Update(float deltaTime)
{
   // Find any stopped event instances
   std::vector<unsigned int> done;
   for (auto& iter : mEventInstances)
   {
      FMOD::Studio::EventInstance* e = iter.second;
      // Get the state of this event
      FMOD_STUDIO_PLAYBACK_STATE state;
      e->getPlaybackState(&state);
      if (state == FMOD_STUDIO_PLAYBACK_STOPPED)
      {
         // Release the event and add id to done
         e->release();
         done.emplace_back(iter.first);
      }
   }
   // Remove done event instances from map
   for (auto id : done)
   {
      mEventInstances.erase(id);
   }
   // Update FMOD
   mSystem->update();
}
class SoundEvent
{
public:
   SoundEvent();
   bool IsValid();
   void Restart();
   // Stop this event
   void Stop(bool allowFadeOut = true);
   // Setters
   void SetPaused(bool pause);
   // Getters
   bool GetPaused() const;
protected:
   // Protected constructor, with AudioSystem as a friend,
   // so only AudioSystem can construct SoundEvents
   friend class AudioSystem;
   SoundEvent(class AudioSystem* system, unsigned int id);
private:
   class AudioSystem* mSystem;
   unsigned int mID;
};

void SoundEvent::SetPaused(bool pause)
{
   auto event = mSystem ?
      mSystem->GetEventInstance(mID) : nullptr;
   if (event)
   {
      event->setPaused(pause);
   }
}
Note how the code validates that both the mSystem and
event pointer are non-null. This ensures that even if the
ID is not in the map, the function will not crash.
Similarly, the SoundEvent::IsValid function returns
true only if mSystem is non-null and the ID is in the
event instance map in AudioSystem.
mMusicEvent.SetPaused(!mMusicEvent.GetPaused());
3D POSITIONAL AUDIO
For 3D games, most sound effects are positional.
This means that an object in the world, such as a
fireplace, emits a sound. The game has a listener,
or a virtual microphone, that picks up this sound. For
example, if the listener faces the fireplace, it should
sound like the fireplace is in front. Similarly, if the
listener has his or her back to the fireplace, the
fireplace should sound like it’s behind.
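The front/behind distinction boils down to simple vector math: the sign of the dot product between the listener's forward vector and the vector to the source. As a standalone illustration (not from the chapter's code, and independent of FMOD):

```cpp
struct Vec3
{
    float x, y, z;
};

static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns +1 if the source is in front of the listener, -1 if behind,
// and 0 if exactly to the side (illustrative helper, not FMOD API)
int ClassifyFrontBack(const Vec3& listenerPos, const Vec3& listenerForward,
                      const Vec3& sourcePos)
{
    Vec3 toSource = sourcePos - listenerPos;
    float d = Dot(listenerForward, toSource);
    if (d > 0.0f) { return 1; }
    if (d < 0.0f) { return -1; }
    return 0;
}
```

FMOD performs this kind of spatialization internally once you supply the listener's position, forward, and up vectors.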
FMOD_VECTOR VecToFMOD(const Vector3& in)
{
   // Convert from our coordinates (+x forward, +y right, +z up)
   // to FMOD (+z forward, +x right, +y up)
   FMOD_VECTOR v;
   v.x = in.y;
   v.y = in.z;
   v.z = in.x;
   return v;
}
void AudioSystem::SetListener(const Matrix4& viewMatrix)
{
   // Invert the view matrix to get the needed world-space vectors
   Matrix4 invView = viewMatrix;
   invView.Invert();
   FMOD_3D_ATTRIBUTES listener;
   listener.position = VecToFMOD(invView.GetTranslation());
   // In the inverted view, third row is forward
   listener.forward = VecToFMOD(invView.GetZAxis());
   // In the inverted view, second row is up
   listener.up = VecToFMOD(invView.GetYAxis());
   // Set velocity to zero (fix if using Doppler effect)
   listener.velocity = {0.0f, 0.0f, 0.0f};
   // 0 = only one listener
   mSystem->setListenerAttributes(0, &listener);
}
bool SoundEvent::Is3D() const
{
   bool retVal = false;
   auto event = mSystem ?
      mSystem->GetEventInstance(mID) : nullptr;
   if (event)
   {
      // Get the event description
      FMOD::Studio::EventDescription* ed = nullptr;
      event->getDescription(&ed);
      if (ed)
      {
         ed->is3D(&retVal); // Is this event 3D?
      }
   }
   return retVal;
}

void SoundEvent::Set3DAttributes(const Matrix4& worldTrans)
{
   auto event = mSystem ?
      mSystem->GetEventInstance(mID) : nullptr;
   if (event)
   {
      FMOD_3D_ATTRIBUTES attr;
      // Set position, forward, up
      attr.position = VecToFMOD(worldTrans.GetTranslation());
      // In world transform, first row is forward
      attr.forward = VecToFMOD(worldTrans.GetXAxis());
      // Third row is up
      attr.up = VecToFMOD(worldTrans.GetZAxis());
      // Set velocity to zero (fix if using Doppler effect)
      attr.velocity = {0.0f, 0.0f, 0.0f};
      event->set3DAttributes(&attr);
   }
}
class AudioComponent : public Component
{
public:
   AudioComponent(class Actor* owner, int updateOrder = 200);
   ~AudioComponent();
   void Update(float deltaTime) override;
   void OnUpdateWorldTransform() override;
   SoundEvent PlayEvent(const std::string& name);
   void StopAllEvents();
private:
   std::vector<SoundEvent> mEvents2D;
   std::vector<SoundEvent> mEvents3D;
};
SoundEvent AudioComponent::PlayEvent(const std::string& name)
{
   SoundEvent e = mOwner->GetGame()->GetAudioSystem()->PlayEvent(name);
   // Is this 2D or 3D?
   if (e.Is3D())
   {
      mEvents3D.emplace_back(e);
      // Set initial 3D attributes
      e.Set3DAttributes(mOwner->GetWorldTransform());
   }
   else
   {
      mEvents2D.emplace_back(e);
   }
   return e;
}
void AudioComponent::OnUpdateWorldTransform()
{
   // Update 3D events' world transforms
   Matrix4 world = mOwner->GetWorldTransform();
   for (auto& event : mEvents3D)
   {
      if (event.IsValid())
      {
         event.Set3DAttributes(world);
      }
   }
}
mLowLevelSystem->set3DSettings(
   1.0f, // Doppler scale, 1 = normal
   50.0f, // How many game units equal 1 meter
   1.0f // (Not used for Doppler, leave at 1)
);
Buses
In FMOD Studio, a bus is a grouping of sounds. For
example, you might have a bus for sound effects, a
bus for music, and a bus for dialogue. Each bus can
individually have different DSP effects attached to it,
and at runtime you can adjust buses. For instance,
many games offer separate volume sliders for
different categories of sound. This is straightforward
to implement with buses.
void AudioSystem::SetBusVolume(const std::string& name, float volume)
{
   auto iter = mBuses.find(name);
   if (iter != mBuses.end())
   {
      iter->second->setVolume(volume);
   }
}
Snapshots
In FMOD, snapshots are special types of events
that control buses. Because they’re just events, they
use the same event interface that already exists, and
the existing PlayEvent function works with them.
The only difference is that their paths begin with
snapshot:/ instead of event:/.
Occlusion
Imagine living in a small apartment when there’s a
party next door. The music at the party is very loud
and travels through your wall. You’ve heard the song
before, but it sounds different when listening
through the wall. The bass is more dominant, and it’s
tough to hear the high-frequency parts. This is
sound occlusion, as illustrated in Figure 7.5(a).
// (In AudioSystem::Initialize, enable the software low-pass filter)
result = mSystem->initialize(
   512, // Max number of concurrent sounds
   FMOD_STUDIO_INIT_NORMAL, // Use default settings
   FMOD_INIT_CHANNEL_LOWPASS, // Initialize channel low-pass filter
   nullptr // Usually null
);

// To occlude an event instance, first flush queued commands so its
// channel group is available, and then set the occlusion factor
mSystem->flushCommands();
FMOD::ChannelGroup* cg = nullptr;
event->getChannelGroup(&cg);
cg->set3DOcclusion(occFactor, occFactor);
GAME PROJECT
This chapter’s game project demonstrates most of
the audio features covered in this chapter. The code
is available in the book’s GitHub repository, in the
Chapter07 directory. Open Chapter07-
windows.sln on Windows and Chapter07-
mac.xcodeproj on Mac. The FMOD Studio project
corresponding to this chapter’s content is in
FMODStudio/Chapter07.
SUMMARY
Most games require audio systems that go beyond
simply playing sound files. Using the FMOD API, this
chapter shows how to implement a production-
quality sound system into the game. The audio
system loads in banks and plays back events. The
SoundEvent class tracks outstanding event
instances and allows manipulation of these
instances.
EXERCISES
This chapter’s exercises build on the audio features
implemented in the chapter. In the first exercise you
add support for the Doppler effect, while in the
second exercise you implement virtual positions for a
third-person listener.
Exercise 7.1
Adjust the listener and event instance attribute code
so that it correctly sets the velocity parameters. Then
make the sphere actor (created in
Game::LoadData) move quickly back and forth to
test the Doppler effect. Use set3DSettings to
adjust the intensity of the effect as needed. The
Doppler effect should be perceptible for the fire loop
audio sound once it’s working correctly.
Exercise 7.2
Implement virtual positions for event instances as
per the third-person listener formulas in this
chapter. Replace the CameraActor class in the
Chapter 7 game project with the CameraActor class
in Exercise/7.2 on GitHub. This version of the
CameraActor implements a basic third-person
camera for testing purposes.
CHAPTER 8
INPUT SYSTEMS
Polling
Earlier in this book, you used the
SDL_GetKeyboardState function to get the
Boolean state of every key on the keyboard. With the
additions in Chapter 3, “Vectors and Basic Physics,”
you then passed this keyboard state to every actor’s
ProcessInput function, which in turn passes it to
every component’s ProcessInput function. Then,
in these functions you can query the state of a
specific key to decide whether to perform an action,
such as moving the player character forward when
pressing the W key. Because you’re checking the value
of a specific key on every frame, this approach is
considered polling the state of the key.
if (spacebar == 1)
character.jump()
For the sample input in Figure 8.1, this code would call
the character.jump() function twice: once on frame
4 and once on frame 5. And if the player held the button
for 10 frames instead of 2, then you’d call
character.jump() 10 times. Clearly, you don’t want
the character to jump every frame when the spacebar
value is 1. Instead, you should only call
character.jump() on the frame where the spacebar
has a positive edge. For the input graph in Figure 8.1,
this is on frame 4. This way, for every press of the
spacebar, regardless of how long the player holds the
spacebar, the character jumps only once. In this case,
you want pseudocode like this:
if (spacebar == 1 and spacebarLast == 0)
   character.jump()
spacebarLast = spacebar
Last Frame   This Frame   Button State
0            0            None
0            1            Pressed
1            0            Released
1            1            Held
Consider how you might use this for a game where the
player can hold a key to charge up an attack. On the
frame on which you detect the Pressed state of the key,
you begin charging the attack. Then as long as the key’s
state on subsequent frames remains Held, you continue
to charge the attack. Finally, when the key’s state
becomes Released, it means the player let go of the key,
and you can now execute the attack with the appropriate
charge level.
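The four-state logic in the table above reduces to a small standalone function (the names here are illustrative; the chapter's KeyboardState implements the same idea):

```cpp
enum ButtonState
{
    ENone,
    EPressed,
    EReleased,
    EHeld
};

// Compute a button's state from its value last frame and this frame
ButtonState GetState(bool last, bool current)
{
    if (!last)
    {
        // Was up last frame: either still up, or a positive edge
        return current ? EPressed : ENone;
    }
    else
    {
        // Was down last frame: either still down, or a negative edge
        return current ? EHeld : EReleased;
    }
}
```

The charge-up attack then becomes: start charging on EPressed, keep charging on EHeld, and execute on EReleased.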
Events
Recall from Chapter 1, “Game Programming
Overview,” that SDL generates different events that
the program can optionally respond to. Currently,
you respond to the SDL_Quit event, which occurs
when the player tries to close the window.
Game::ProcessInput checks every frame if there
are events in the queue and can selectively choose to
respond to them.
enum ButtonState
{
   ENone,
   EPressed,
   EReleased,
   EHeld
};

struct InputState
{
   KeyboardState Keyboard;
};

class InputSystem
{
public:
   bool Initialize();
   void Shutdown();
   // Called right before SDL_PollEvents loop
   void PrepareForUpdate();
   // Called right after SDL_PollEvents loop
   void Update();
   const InputState& GetState() const { return mState; }
private:
   InputState mState;
};
void Game::ProcessInput()
{
   mInputSystem->PrepareForUpdate();
   // SDL_PollEvent loop...
   mInputSystem->Update();
}
KEYBOARD INPUT
Recall that the SDL_GetKeyboardState function
returns a pointer to the keyboard state. Notably, the
return value of SDL_GetKeyboardState does not
change throughout the lifetime of the application, as
it points to internal SDL data. Therefore, to track the
current state of the keyboard, you merely need a
single pointer that you initialize once. However,
because SDL overwrites the current keyboard state
when you call SDL_PollEvents, you need a
separate array to save the previous frame state.
class KeyboardState
{
public:
   // Friend so InputSystem can easily update it
   friend class InputSystem;
   // Get just the Boolean true/false value of a key
   bool GetKeyValue(SDL_Scancode keyCode) const;
   // Get a state based on current and previous frame
   ButtonState GetKeyState(SDL_Scancode keyCode) const;
private:
   // Current state
   const Uint8* mCurrState;
   // State of keyboard on previous frame
   Uint8 mPrevState[SDL_NUM_SCANCODES];
};
struct InputState
{
KeyboardState Keyboard;
};
// (In InputSystem::Initialize...)
mState.Keyboard.mCurrState = SDL_GetKeyboardState(NULL);
memset(mState.Keyboard.mPrevState, 0,
SDL_NUM_SCANCODES);
// (In InputSystem::PrepareForUpdate...)
memcpy(mState.Keyboard.mPrevState,
mState.Keyboard.mCurrState,
SDL_NUM_SCANCODES);
ButtonState KeyboardState::GetKeyState(SDL_Scancode keyCode) const
{
   if (mPrevState[keyCode] == 0)
   {
      if (mCurrState[keyCode] == 0)
      { return ENone; }
      else
      { return EPressed; }
   }
   else // Prev state must be 1
   {
      if (mCurrState[keyCode] == 0)
      { return EReleased; }
      else
      { return EHeld; }
   }
}
if (state.Keyboard.GetKeyValue(SDL_SCANCODE_SPACE))
if (state.Keyboard.GetKeyState(SDL_SCANCODE_ESCAPE)
   == EReleased)
{
   mIsRunning = false;
}
MOUSE INPUT
For mouse input, there are three main types of input
to focus on: button input, movement of the mouse,
and movement of the scroll wheel. The button input
code is like the keyboard code except that the
number of buttons is significantly smaller. The
movement input is a little more complex because
there are two modes of input (absolute and relative).
Ultimately, you can still poll the mouse input with a
single function call per frame. However, for the scroll
wheel, SDL only reports the data via an event, so you
must add some code to InputSystem to also
process certain SDL events.
SDL_ShowCursor(SDL_FALSE);
int x = 0, y = 0;
Uint32 buttons = SDL_GetMouseState(&x, &y);
note
For the position of the mouse, SDL uses the SDL 2D coordinate system. This
means that the top-left corner is (0, 0), positive x is to the right, and positive y
is down. However, you can easily convert these coordinates to whichever
other system you prefer.
For example, to convert to the simple view-projection coordinate system from
Chapter 5, “OpenGL,” you can use the following two lines of code:
x = x - screenWidth/2;
y = screenHeight/2 - y;
Button   SDL Constant
Left     SDL_BUTTON_LEFT
Right    SDL_BUTTON_RIGHT
Middle   SDL_BUTTON_MIDDLE
class MouseState
{
public:
   friend class InputSystem;
   // For mouse position
   const Vector2& GetPosition() const { return mMousePos; }
   const Vector2& GetScrollWheel() const { return mScrollWheel; }
   bool IsRelative() const { return mIsRelative; }
   // For buttons
   bool GetButtonValue(int button) const;
   ButtonState GetButtonState(int button) const;
private:
   // Store current mouse position
   Vector2 mMousePos;
   // Motion of scroll wheel
   Vector2 mScrollWheel;
   // Store button data
   Uint32 mCurrButtons;
   Uint32 mPrevButtons;
   // Are we in relative mouse mode?
   bool mIsRelative;
};
mState.Mouse.mPrevButtons = mState.Mouse.mCurrButtons;
In Update, you call SDL_GetMouseState to update all
the MouseState members:
int x = 0, y = 0;
mState.Mouse.mCurrButtons = SDL_GetMouseState(&x, &y);
mState.Mouse.mMousePos.x = static_cast<float>(x);
mState.Mouse.mMousePos.y = static_cast<float>(y);
if (state.Mouse.GetButtonState(SDL_BUTTON_LEFT) == EPressed)
Relative Motion
SDL supports two different modes for detecting
mouse movement. In the default mode, SDL reports
the current coordinates of the mouse. However,
sometimes you instead want to know the relative
change of the mouse between frames. For example,
in many first-person games on PC, you can use the
mouse to rotate the camera. The speed of the
camera’s rotation depends on how fast the player
moves the mouse. In this case, exact coordinates of
the mouse aren’t useful, but the relative movement
between frames is.
SDL_SetRelativeMouseMode(SDL_TRUE);

// (In InputSystem::SetRelativeMouseMode...)
SDL_bool set = value ? SDL_TRUE : SDL_FALSE;
SDL_SetRelativeMouseMode(set);
mState.Mouse.mIsRelative = value;
int x = 0, y = 0;
if (mState.Mouse.mIsRelative)
{
   mState.Mouse.mCurrButtons =
      SDL_GetRelativeMouseState(&x, &y);
}
else
{
   mState.Mouse.mCurrButtons = SDL_GetMouseState(&x, &y);
}
mState.Mouse.mMousePos.x = static_cast<float>(x);
mState.Mouse.mMousePos.y = static_cast<float>(y);
With this code, you can now enable relative mouse mode
and access the relative mouse position via MouseState.
Scroll Wheel
For the scroll wheel, SDL does not provide a function
to poll the current state of the wheel. Instead, SDL
generates the SDL_MOUSEWHEEL event. To support
this in the input system, then, you must first add
support for passing SDL events to InputSystem.
You can do this via a ProcessEvent function, and
then you update the event polling loop in
Game::ProcessInput to pass the mouse wheel
event to the input system:
SDL_Event event;
while (SDL_PollEvent(&event))
{
   switch (event.type)
   {
   case SDL_MOUSEWHEEL:
      mInputSystem->ProcessEvent(event);
      break;
   // Other cases...
   }
}
Vector2 mScrollWheel;
void InputSystem::ProcessEvent(SDL_Event& event)
{
   switch (event.type)
   {
   case SDL_MOUSEWHEEL:
      mState.Mouse.mScrollWheel = Vector2(
         static_cast<float>(event.wheel.x),
         static_cast<float>(event.wheel.y));
      break;
   default:
      break;
   }
}

// (In PrepareForUpdate, reset the scroll wheel each frame, because
// SDL only generates the event on frames where the wheel moves)
mState.Mouse.mScrollWheel = Vector2::Zero;
With this code, you can now access the scroll wheel state
every frame with the following:
Vector2 scroll = state.Mouse.GetScrollWheel();
CONTROLLER INPUT
For numerous reasons, detecting controller input in
SDL is more complex than for the keyboard and
mouse. First, a controller has a much greater variety
of sensors than a keyboard or mouse. For example, a
standard Microsoft Xbox controller has two analog
joysticks, a directional pad, four standard face
buttons, three special face buttons, two bumper
buttons, and two triggers—which is a lot of different
sensors to get data from.
note
Depending on the controller and your platform, you may need to first install a
driver for your controller in order for SDL to detect it.
Before you can use a controller, you must first initialize
the SDL subsystem that handles controllers. To enable it,
simply add the SDL_INIT_GAMECONTROLLER flag to the
SDL_Init call in Game::Initialize:
mController = SDL_GameControllerOpen(0);
tip
By default, SDL supports a handful of common controllers, such as the
Microsoft Xbox controller. You can find controller mappings that specify the
button layouts of many other controllers. The
SDL_GameControllerAddMappingsFromFile function can load
controller mappings from a supplied file. A community-maintained mapping
file is available on GitHub at
https://github.com/gabomdq/SDL_GameControllerDB.
Buttons
Game controllers in SDL support many different
buttons. SDL uses a naming convention that mirrors
the button names of a Microsoft Xbox controller. For
example, the names of the face buttons are A, B, X,
and Y. Table 8.3 lists the different button constants
defined by SDL, where * is a wildcard that denotes
multiple possible values.
Button                 Constant
A, B, X, Y             SDL_CONTROLLER_BUTTON_* (A, B, X, Y)
Back                   SDL_CONTROLLER_BUTTON_BACK
Start                  SDL_CONTROLLER_BUTTON_START
Left/right stick       SDL_CONTROLLER_BUTTON_*STICK (LEFT, RIGHT)
Left/right shoulder    SDL_CONTROLLER_BUTTON_*SHOULDER (LEFT, RIGHT)
Directional pad        SDL_CONTROLLER_BUTTON_DPAD_* (UP, DOWN, LEFT, RIGHT)
Note that the left and right stick buttons are for when the
user physically presses in the left/right stick. Some
games use pressing in the right stick for sprinting, for
example.
class ControllerState
{
public:
   friend class InputSystem;
   // For buttons
   bool GetButtonValue(SDL_GameControllerButton button) const;
   ButtonState GetButtonState(SDL_GameControllerButton button)
      const;
   bool GetIsConnected() const { return mIsConnected; }
private:
   // Current/previous buttons
   Uint8 mCurrButtons[SDL_CONTROLLER_BUTTON_MAX];
   Uint8 mPrevButtons[SDL_CONTROLLER_BUTTON_MAX];
   // Is this controller connected?
   bool mIsConnected;
};
ControllerState Controller;
memset(mState.Controller.mCurrButtons, 0,
SDL_CONTROLLER_BUTTON_MAX);
memset(mState.Controller.mPrevButtons, 0,
SDL_CONTROLLER_BUTTON_MAX);
memcpy(mState.Controller.mPrevButtons,
mState.Controller.mCurrButtons,
SDL_CONTROLLER_BUTTON_MAX);
for (int i = 0; i < SDL_CONTROLLER_BUTTON_MAX; i++)
{
   mState.Controller.mCurrButtons[i] =
      SDL_GameControllerGetButton(mController,
         SDL_GameControllerButton(i));
}
With this code, you can then query the state of a specific
game controller button, using a pattern like the keyboard
and mouse buttons. For example, this code checks if the
A button on the controller has a positive edge this frame:
if (state.Controller.GetButtonState(SDL_CONTROLLER_BUTTON_A) == EPressed)
Axis                 Constant
Left stick X/Y       SDL_CONTROLLER_AXIS_LEFTX / _LEFTY
Right stick X/Y      SDL_CONTROLLER_AXIS_RIGHTX / _RIGHTY
Left/right trigger   SDL_CONTROLLER_AXIS_TRIGGERLEFT / _TRIGGERRIGHT
float InputSystem::Filter1D(int input)
{
   // A value < dead zone is interpreted as 0%
   const int deadZone = 250;
   // A value > max value is interpreted as 100%
   const int maxValue = 30000;
   float retVal = 0.0f;
   // Take absolute value of input
   int absValue = input > 0 ? input : -input;
   // Ignore input within dead zone
   if (absValue > deadZone)
   {
      // Compute fractional value between dead zone and max value
      retVal = static_cast<float>(absValue - deadZone) /
         (maxValue - deadZone);
      // Make sure sign matches original value
      retVal = input > 0 ? retVal : -1.0f * retVal;
      // Clamp between -1.0f and 1.0f
      retVal = Math::Clamp(retVal, -1.0f, 1.0f);
   }
   return retVal;
}

float mLeftTrigger;
float mRightTrigger;

mState.Controller.mLeftTrigger =
   Filter1D(SDL_GameControllerGetAxis(mController,
      SDL_CONTROLLER_AXIS_TRIGGERLEFT));
Vector2 InputSystem::Filter2D(int inputX, int inputY)
{
   const float deadZone = 8000.0f;
   const float maxValue = 30000.0f;
   // Make into 2D vector
   Vector2 dir;
   dir.x = static_cast<float>(inputX);
   dir.y = static_cast<float>(inputY);
   float length = dir.Length();
   // If length < deadZone, should be no input
   if (length < deadZone)
   {
      dir = Vector2::Zero;
   }
   else
   {
      // Calculate fractional value between
      // dead zone and max value circles
      float f = (length - deadZone) / (maxValue - deadZone);
      // Clamp f between 0.0f and 1.0f
      f = Math::Clamp(f, 0.0f, 1.0f);
      // Normalize the vector, then scale it to the
      // fractional value
      dir *= f / length;
   }
   return dir;
}
int x = SDL_GameControllerGetAxis(mController,
   SDL_CONTROLLER_AXIS_LEFTX);
// Negate y so that +y is up
int y = -SDL_GameControllerGetAxis(mController,
   SDL_CONTROLLER_AXIS_LEFTY);
mState.Controller.mLeftStick = Filter2D(x, y);
You can then access the value of the left stick via
InputState with code like this:
Vector2 leftStick = state.Controller.GetLeftStick();

// (During initialization, you can also check which joysticks SDL
// recognizes as controllers)
for (int i = 0; i < SDL_NumJoysticks(); i++)
{
   // Is this joystick a controller?
   if (SDL_IsGameController(i))
   {
      // Open and use this controller...
   }
}
INPUT MAPPINGS
The way you currently use the data from
InputState, the code assumes that specific input
devices and keys map directly to actions. For
example, if you want the player character to jump on
the positive edge of a spacebar, you add code like this
to ProcessInput:
bool shouldJump =
   state.Keyboard.GetKeyState(SDL_SCANCODE_SPACE) == EPressed;
GAME PROJECT
This chapter’s game project adds a full
implementation of the InputSystem from this
chapter to the game project from Chapter 5. This
includes all the code for the keyboard, mouse, and
controller. Recall that the Chapter 5 project uses 2D
movement (so position is a Vector2). The code is
available in the book’s GitHub repository, in the
Chapter08 directory. Open Chapter08-
windows.sln on Windows and Chapter08-
mac.xcodeproj on Mac.
if (state.Controller.GetIsConnected())
{
   mVelocityDir = state.Controller.GetLeftStick();
   if (!Math::NearZero(state.Controller.GetRightStick().Length()))
   {
      mRotationDir = state.Controller.GetRightStick();
   }
}
You add the NearZero check for the right stick to make
sure that if the player releases the right stick completely,
the ship doesn’t snap back to an initial angle of zero.
Note that this code reduces the speed based on how far
you move the left stick in a direction because
mVelocityDir can have a length less than one in this
case.
SetRotation(angle);
Figure 8.4 shows what the game looks like with the ship
moving around.
Figure 8.4 Ship moving around in the Chapter 8
game project
SUMMARY
Many different input devices are used for games. A
device might report either a single Boolean value or a
range of inputs. For a key/button that reports a
simple on/off state, it’s useful to consider the
difference between the value in this frame and the
value in the last frame. This way, you can detect the
positive or negative edge of the input, corresponding
to a “pressed” or “released” state.
ADDITIONAL READING
Bruce Dawson covers how to record input and then
play it back, which is very useful for testing. The
Oculus SDK documentation covers how to interface
with Oculus VR touch controllers. Finally, Mick West
explores how to measure input lag, which is the
amount of time it takes a game to detect inputs from
controllers. Input lag is generally not the fault of the
input code, but West’s material is interesting
nonetheless.
EXERCISES
In this chapter’s exercises you will improve the input
system. In the first exercise you add support for
multiple controllers. In the second exercise you add
input mappings.
Exercise 8.1
Recall that to support multiple controllers, you need
to have multiple ControllerState instances in
the InputState struct. Add code to support a
maximum of four controllers simultaneously. On
initialization, change the code to detect any
connected controllers and enable them individually.
Then change the Update code so that it updates up
to all four controllers instead of just a single one.
Exercise 8.2
Add support for basic input mappings, where an action
name maps to a specific device and button. Define the
mappings in a data file with comma-separated lines
such as:
Fire,Controller,A
CHAPTER 9
CAMERAS
if (!Math::NearZero(mForwardSpeed) || !Math::NearZero(mStrafeSpeed))
{
   Vector3 pos = mOwner->GetPosition();
   pos += mOwner->GetForward() * mForwardSpeed * deltaTime;
   // Update position based on strafe
   pos += mOwner->GetRight() * mStrafeSpeed * deltaTime;
   mOwner->SetPosition(pos);
}

// (Use relative mouse motion for yaw)
int x, y;
SDL_GetRelativeMouseState(&x, &y);
// Assume mouse movement is usually between -500 and +500
const int maxMouseSpeed = 500;
// Rotation/sec at maximum speed
const float maxAngularSpeed = Math::Pi * 8;
float angularSpeed = 0.0f;
if (x != 0)
{
   // Convert to approximately [-1.0, 1.0]
   angularSpeed = static_cast<float>(x) / maxMouseSpeed;
   // Multiply by rotation/sec
   angularSpeed *= maxAngularSpeed;
}
mMoveComp->SetAngularSpeed(angularSpeed);
void CameraComponent::SetViewMatrix(const Matrix4& view)
{
   // Pass view matrix to renderer and audio system
   Game* game = mOwner->GetGame();
   game->GetRenderer()->SetViewMatrix(view);
   game->GetAudioSystem()->SetListener(view);
}

// (In the basic first-person camera update)
Vector3 cameraPos = mOwner->GetPosition();
// Target position 100 units in front of owner
Vector3 target = cameraPos + mOwner->GetForward() * 100.0f;
// Up is just unit z
Vector3 up = Vector3::UnitZ;
// Create look at matrix, set as view
Matrix4 view = Matrix4::CreateLookAt(cameraPos, target, up);
SetViewMatrix(view);
Adding Pitch
Recall from Chapter 6 that yaw is rotation about the
up axis and pitch is rotation about the side axis (in
this case, the right axis). Incorporating pitch into the
FPS camera requires a few changes. The camera still
starts with the forward vector from the owner, but
you apply an additional rotation to account for the
pitch. Then, you derive a target from this view
forward. To implement this, you add three new
member variables to FPSCamera:
// Rotation/sec speed of pitch
float mPitchSpeed;
// Maximum pitch deviation from forward
float mMaxPitch;
// Current pitch
float mPitch;

void FPSCamera::Update(float deltaTime)
{
   // Call parent update (doesn't do anything right now)
   CameraComponent::Update(deltaTime);
   // Camera position is owner position
   Vector3 cameraPos = mOwner->GetPosition();
   // Update pitch, clamped to +/- max pitch
   mPitch += mPitchSpeed * deltaTime;
   mPitch = Math::Clamp(mPitch, -mMaxPitch, mMaxPitch);
   // Make a quaternion representing pitch rotation
   // about owner's right vector
   Quaternion q(mOwner->GetRight(), mPitch);
   // Rotate owner forward by pitch quaternion
   Vector3 viewForward = Vector3::Transform(
      mOwner->GetForward(), q);
   // Target position 100 units in front of view forward
   Vector3 target = cameraPos + viewForward * 100.0f;
   // Also rotate up by pitch quaternion
   Vector3 up = Vector3::Transform(Vector3::UnitZ, q);
   // Create look at matrix, set as view
   Matrix4 view = Matrix4::CreateLookAt(cameraPos, target, up);
   SetViewMatrix(view);
}
First-Person Model
Although it’s not strictly part of the camera, most
first-person games also incorporate a first-person
model. This model may have parts of an animated
character, such as arms, feet, and so on. If the player
carries a weapon, then when the player pitches up,
the weapon appears to also aim up. You want the
weapon model to pitch up even though the player
character remains flat with the ground.
// Position the model relative to the actor, using a fixed offset
Vector3 modelPos = GetPosition();
modelPos += GetForward() * modelOffset.x;
modelPos += GetRight() * modelOffset.y;
modelPos.z += modelOffset.z;
mFPSModel->SetPosition(modelPos);
// Initialize rotation to actor rotation
Quaternion q = GetRotation();
// Rotate by pitch of camera
q = Quaternion::Concatenate(q,
   Quaternion(GetRight(), mCameraComp->GetPitch()));
mFPSModel->SetRotation(q);
FOLLOW CAMERA
A follow camera is a camera that follows behind a
target object. This type of camera is popular in many
games, including racing games where the camera
follows behind a car and third-person
action/adventure games such as Horizon Zero
Dawn. Because follow cameras see use in many
different types of games, there is a great deal of
variety in their implementation. This section focuses
on a follow camera tracking a car.
Vector3 FollowCamera::ComputeCameraPos() const
{
   // Set camera position behind and above owner
   Vector3 cameraPos = mOwner->GetPosition();
   cameraPos -= mOwner->GetForward() * mHorzDist;
   cameraPos += Vector3::UnitZ * mVertDist;
   return cameraPos;
}

void FollowCamera::Update(float deltaTime)
{
   CameraComponent::Update(deltaTime);
   // Target is target distance in front of owning actor
   Vector3 target = mOwner->GetPosition() +
      mOwner->GetForward() * mTargetDist;
   // (Up is just UnitZ since we don't flip the camera)
   Matrix4 view = Matrix4::CreateLookAt(ComputeCameraPos(),
      target, Vector3::UnitZ);
   SetViewMatrix(view);
}
Adding a Spring
Rather than having the camera position instantly
changing to the position as per the equation, you can
have the camera adjust to this position over the
course of several frames. To accomplish this, you can
separate the camera position into an “ideal” camera
position and an “actual” camera position. The ideal
camera position is the position derived from the
basic follow camera equations, while the actual
camera position is what the view matrix uses.
void FollowCamera::Update(float deltaTime)
{
   CameraComponent::Update(deltaTime);
   // Compute dampening from spring constant
   float dampening = 2.0f * Math::Sqrt(mSpringConstant);
   // Compute acceleration of spring
   Vector3 idealPos = ComputeCameraPos();
   Vector3 diff = mActualPos - idealPos;
   Vector3 acel = -mSpringConstant * diff -
      dampening * mVelocity;
   // Update velocity and actual camera position
   mVelocity += acel * deltaTime;
   mActualPos += mVelocity * deltaTime;
   // Target is target dist in front of owning actor
   Vector3 target = mOwner->GetPosition() +
      mOwner->GetForward() * mTargetDist;
   // Use actual position here, not ideal
   Matrix4 view = Matrix4::CreateLookAt(mActualPos, target,
      Vector3::UnitZ);
   SetViewMatrix(view);
}

void FollowCamera::SnapToIdeal()
{
   // Set actual position to ideal
   mActualPos = ComputeCameraPos();
   // Zero velocity
   mVelocity = Vector3::Zero;
   // Compute target and view
   Vector3 target = mOwner->GetPosition() +
      mOwner->GetForward() * mTargetDist;
   Matrix4 view = Matrix4::CreateLookAt(mActualPos, target,
      Vector3::UnitZ);
   SetViewMatrix(view);
}
ORBIT CAMERA
An orbit camera focuses on a target object and
orbits around it. This type of camera might be used
in a builder game such as Planet Coaster, as it allows
the player to easily see the area around an object.
The simplest implementation of an orbit camera
stores the camera’s position as an offset from the
target rather than as an absolute world space
position. This takes advantage of the fact that
rotations always rotate about the origin. So, if the
camera position is an offset from the target object,
any rotations are effectively about the target object.
// Offset of camera from target
Vector3 mOffset;
// Up vector of camera
Vector3 mUp;
// Rotation/sec speed of pitch
float mPitchSpeed;
// Rotation/sec speed of yaw
float mYawSpeed;

void OrbitCamera::Update(float deltaTime)
{
   CameraComponent::Update(deltaTime);
   // Create a quaternion for yaw about world up
   Quaternion yaw(Vector3::UnitZ, mYawSpeed * deltaTime);
   // Transform offset and up by yaw
   mOffset = Vector3::Transform(mOffset, yaw);
   mUp = Vector3::Transform(mUp, yaw);
   // Compute camera forward/right from these vectors
   // Forward is owner.position - (owner.position + offset)
   // = -offset
   Vector3 forward = -1.0f * mOffset;
   forward.Normalize();
   Vector3 right = Vector3::Cross(mUp, forward);
   right.Normalize();
   // Create quaternion for pitch about camera right
   Quaternion pitch(right, mPitchSpeed * deltaTime);
   // Transform camera offset and up by pitch
   mOffset = Vector3::Transform(mOffset, pitch);
   mUp = Vector3::Transform(mUp, pitch);
   // Compute look-at matrix targeting the owner
   Vector3 target = mOwner->GetPosition();
   Matrix4 view = Matrix4::CreateLookAt(target + mOffset,
      target, mUp);
   SetViewMatrix(view);
}
SPLINE CAMERA
A spline is a mathematical representation of a curve
specified by a series of points on the curve. Splines
are popular in games because they enable an object
to smoothly move along a curve over some period.
This can be very useful for a cutscene camera
because the camera can follow a predefined spline
path. This type of camera also sees use in games like
God of War, where the camera follows along a set
path as the player progresses through the world.
struct Spline
{
   // Control points for spline
   // (Requires n + 2 points, where n is the number
   // of points in the path)
   std::vector<Vector3> mControlPoints;
   // Given a spline segment where startIdx = P1,
   // compute position based on t value
   Vector3 Compute(size_t startIdx, float t) const;
   size_t GetNumPoints() const { return mControlPoints.size(); }
};

Vector3 Spline::Compute(size_t startIdx, float t) const
{
   // Check if startIdx is out of bounds
   if (startIdx >= mControlPoints.size())
   { return mControlPoints.back(); }
   else if (startIdx == 0)
   { return mControlPoints[startIdx]; }
   else if (startIdx + 2 >= mControlPoints.size())
   { return mControlPoints[startIdx]; }
   // Get p0 through p3
   Vector3 p0 = mControlPoints[startIdx - 1];
   Vector3 p1 = mControlPoints[startIdx];
   Vector3 p2 = mControlPoints[startIdx + 1];
   Vector3 p3 = mControlPoints[startIdx + 2];
   // Compute position according to Catmull-Rom equation
   Vector3 position = 0.5f * ((2.0f * p1) + (-1.0f * p0 + p2) * t +
      (2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3) * t * t +
      (-1.0f * p0 + 3.0f * p1 - 3.0f * p2 + p3) * t * t * t);
   return position;
}
// Spline path the camera follows
Spline mPath;
// Current control point index and t
size_t mIndex;
float mT;
// Amount t changes/sec
float mSpeed;
// Whether to move the camera along the path
bool mPaused;

void SplineCamera::Update(float deltaTime)
{
   CameraComponent::Update(deltaTime);
   // Update t value
   if (!mPaused)
   {
      mT += mSpeed * deltaTime;
      // Advance to the next control point if needed
      if (mT >= 1.0f)
      {
         // Make sure we have enough points to advance the path
         if (mIndex < mPath.GetNumPoints() - 3)
         {
            mIndex++;
            mT = mT - 1.0f;
         }
         else
         {
            // Path's done, so pause
            mPaused = true;
         }
      }
   }
   // Camera position is the spline at the current t/index
   Vector3 cameraPos = mPath.Compute(mIndex, mT);
   // Target point is just a small delta ahead on the spline
   Vector3 target = mPath.Compute(mIndex, mT + 0.01f);
   // The spline camera doesn't flip, so up is always UnitZ
   Matrix4 view = Matrix4::CreateLookAt(cameraPos, target,
      Vector3::UnitZ);
   SetViewMatrix(view);
}
UNPROJECTION
Given a point in world space, to transform it into clip
space, you first multiply by the view matrix followed
by the projection matrix. Imagine that the player in a
first-person shooter wants to fire a projectile based
on the screen position of the aiming reticule. In this
case, the aiming reticule position is a coordinate in
screen space, but to correctly fire the projectile, you
need a position in world space. An unprojection is
a calculation that takes in a screen space coordinate
and converts it into a world space coordinate.
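The first step of any unprojection is converting the screen-space point into normalized device coordinates. A minimal standalone sketch, assuming the SDL-style convention of (0, 0) at the top left with +y down:

```cpp
struct Vec2
{
    float x, y;
};

// Convert a screen-space point ((0, 0) top-left, +y down) into
// normalized device coordinates in [-1, +1] with +y up
Vec2 ScreenToNDC(float x, float y, float width, float height)
{
    Vec2 ndc;
    ndc.x = 2.0f * x / width - 1.0f;
    ndc.y = 1.0f - 2.0f * y / height;
    return ndc;
}
```

The resulting NDC point (with a chosen depth) is then multiplied by the inverse of the view-projection matrix, followed by a divide by w, to land back in world space.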
// (In Renderer::Unproject, the unprojection matrix is the inverse
// of the view-projection matrix)
Matrix4 unprojection = mView * mProjection;
unprojection.Invert();

void Renderer::GetScreenDirection(Vector3& outStart, Vector3& outDir) const
{
   // Get start point (in center of screen on near plane)
   Vector3 screenPoint(0.0f, 0.0f, 0.0f);
   outStart = Unproject(screenPoint);
   // Get end point (in center of screen, between near and far)
   screenPoint.z = 0.9f;
   Vector3 end = Unproject(screenPoint);
   // Get direction vector
   outDir = end - outStart;
   outDir.Normalize();
}
GAME PROJECT
This chapter’s game project demonstrates all the
different cameras discussed in the chapter, as well as
the unprojection code. The code is available in the
book’s GitHub repository, in the Chapter09
directory. Open Chapter09-windows.sln on
Windows and Chapter09-mac.xcodeproj on
Mac.
Follow—Use W/S to move forward and back and use A/D to rotate
(yaw)
Orbit camera mode—Hold down the right mouse button and move
the mouse to rotate
ADDITIONAL READING
There are not many books dedicated to the topic of
game cameras. However, Mark Haigh-Hutchinson,
the primary programmer for the Metroid Prime
camera system, provides an overview of many
different techniques relevant for game cameras.
EXERCISES
In this chapter’s exercises, you will add features to
some of the cameras. In the first exercise, you add
mouse support to the follow camera, and in the
second exercise, you add features to the spline
camera.
Exercise 9.1
Many follow cameras have support for user-
controlled rotation of the camera. For this exercise,
add code to the follow camera implementation that
allows the user to rotate the camera. When the player
holds down the right mouse button, apply an
additional pitch and yaw rotation to the camera.
When the player releases the right mouse button, set
the pitch/yaw rotation back to zero.
The code for the rotation is like the rotation code for the
orbit camera. Furthermore, as with the orbit camera, the
code can no longer assume that the z-axis is up. When
the player releases the mouse button, the camera won’t
immediately snap back to the original orientation
because of the spring. However, this is aesthetically
pleasing, so there’s no reason to change this behavior!
Exercise 9.2
Currently, the spline camera goes in only one
direction on the path and stops upon reaching the
end. Modify the code so that when the spline hits the
end of the path, it starts moving backward.
CHAPTER 10
COLLISION DETECTION
Line Segments
A line segment comprises start and end points:
struct LineSegment
{
   Vector3 mStart;
   Vector3 mEnd;
};
float LineSegment::MinDistSq(const Vector3& point) const
{
   // Construct vectors
   Vector3 ab = mEnd - mStart;
   Vector3 ba = -1.0f * ab;
   Vector3 ac = point - mStart;
   Vector3 bc = point - mEnd;
   // Case 1: point projects before mStart
   if (Vector3::Dot(ab, ac) < 0.0f)
   { return ac.LengthSq(); }
   // Case 2: point projects after mEnd
   else if (Vector3::Dot(ba, bc) < 0.0f)
   { return bc.LengthSq(); }
   else
   {
      // Case 3: point projects onto the segment
      // Compute p, the projection of ac onto ab
      float scalar = Vector3::Dot(ac, ab)
         / Vector3::Dot(ab, ab);
      Vector3 p = scalar * ab;
      return (ac - p).LengthSq();
   }
}
Planes
A plane is a flat, two-dimensional surface that
extends infinitely, much as a line is a one-
dimensional object that extends infinitely. In a game,
you may use a plane as an abstraction for the ground
or walls. The equation of a plane is as follows:
P · n + d = 0

where P is an arbitrary point on the plane, n is the
normal to the plane, and d is the signed minimal
distance between the plane and the origin.
struct Plane
{
   Vector3 mNormal;
   float mD;
};
Given three points on the plane, you can construct the plane directly:

Plane::Plane(const Vector3& a, const Vector3& b, const Vector3& c)
{
   // Compute vectors from a to b and a to c, then the normal
   Vector3 ab = b - a;
   Vector3 ac = c - a;
   mNormal = Vector3::Cross(ab, ac);
   mNormal.Normalize();
   // d = -P dot n
   mD = -Vector3::Dot(a, mNormal);
}
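As a standalone sanity check of this construction, the same math can be sketched with a minimal stand-in vector type. Here Vec3, Dot, Cross, and PlaneSketch are hypothetical names for illustration, not the book's Vector3/Plane API:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for this sketch (not the book's Vector3 API)
struct Vec3 { float x, y, z; };
Vec3 Sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 Cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

struct PlaneSketch {
    Vec3 normal;
    float d; // convention: P dot n + d == 0 for points on the plane
    // Construct from three (counterclockwise) points on the plane
    PlaneSketch(const Vec3& a, const Vec3& b, const Vec3& c) {
        Vec3 n = Cross(Sub(b, a), Sub(c, a));
        float len = std::sqrt(Dot(n, n));
        normal = {n.x / len, n.y / len, n.z / len};
        d = -Dot(a, normal); // d = -P dot n
    }
    // Positive on the side the normal faces, zero on the plane
    float SignedDist(const Vec3& p) const { return Dot(p, normal) + d; }
};
```

For the plane z = 5 built from three points at that height, d comes out to -5, and a point at height 8 sits at a signed distance of 3 above the plane.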
Bounding Volumes
Modern 3D games have characters and objects
drawn with thousands of triangles. When
determining whether two objects collide, it’s not
efficient to test all the triangles comprising the
object. For this reason, games use simplified
bounding volumes, such as boxes or spheres.
When deciding whether two objects intersect, the
game uses the simplified collision for calculations.
This yields greatly improved efficiency.
Spheres
The simplest representation of the bounds of a 3D
object is a sphere. The definition of a sphere only
requires the position of the center of the sphere and a
radius:
struct Sphere
{
Vector3 mCenter;
float mRadius;
};
struct AABB
{
   Vector3 mMin;
   Vector3 mMax;
};
An UpdateMinMax function grows the box so it contains a given point, which makes it easy to build an AABB from a set of points: seed the box with the first point and then call box.UpdateMinMax(points[i]) for each remaining point. To rotate an AABB, construct its eight corner points (from points[0] = mMin through points[7] = Vector3(mMax)), reset the box to the first rotated corner:

mMin = p;
mMax = p;

and then transform and incorporate each remaining corner:

p = Vector3::Transform(points[i], q);
UpdateMinMax(p);

Note that rotating an AABB in this way can grow the box, because the result must remain axis-aligned.
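The grow-to-fit idea can be sketched in isolation. AABBSketch and MakeBox below are hypothetical minimal stand-ins, not the book's classes:

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Hypothetical minimal AABB for this sketch
struct Vec3f { float x, y, z; };
struct AABBSketch {
    Vec3f mMin, mMax;
    // Grow the box so it contains the given point
    void UpdateMinMax(const Vec3f& p) {
        mMin.x = std::min(mMin.x, p.x);
        mMin.y = std::min(mMin.y, p.y);
        mMin.z = std::min(mMin.z, p.z);
        mMax.x = std::max(mMax.x, p.x);
        mMax.y = std::max(mMax.y, p.y);
        mMax.z = std::max(mMax.z, p.z);
    }
    bool Contains(const Vec3f& p) const {
        bool outside = p.x < mMin.x || p.y < mMin.y || p.z < mMin.z ||
                       p.x > mMax.x || p.y > mMax.y || p.z > mMax.z;
        return !outside;
    }
};

// Construct a box from a point set: seed with the first point, then grow
AABBSketch MakeBox(const std::vector<Vec3f>& points) {
    AABBSketch box{points[0], points[0]};
    for (size_t i = 1; i < points.size(); i++) {
        box.UpdateMinMax(points[i]);
    }
    return box;
}
```

Seeding with the first point (rather than zero) matters: a box seeded at the origin would wrongly include the origin even if no input point is near it.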
struct OBB
{
   Vector3 mCenter;
   Quaternion mRotation;
   Vector3 mExtents;
};
Figure 10.5 An oriented bounding box for a
humanoid character that’s rotated
Capsules
A capsule is a line segment with a radius:
struct Capsule
{
   LineSegment mSegment;
   float mRadius;
};
Convex Polygons
Sometimes, a game may need bounds for an object
that are more accurate than any of the basic shapes.
For a 2D game, the object might have bounds
represented as a convex polygon. Recall that a
polygon is convex if all its interior angles are less
than 180°.
struct ConvexPolygon
{
std::vector<Vector2> mVertices;
};
INTERSECTION TESTS
Once the game is using geometric types to represent
game objects, the next step is to test for intersections
between these objects. This section looks at a series
of useful tests. First, it explores whether an object
contains a point. Then, it looks at intersections
between different types of bounding volumes. Next,
it looks at intersections between a line segment and
other objects. Finally, this section covers how to
handle dynamically moving objects.
AABB Contains Point Tests
Given a 2D axis-aligned box, a point is outside the
box if any of the following cases are true: The point is
to the left of the box, the point is to the right of the
box, the point is above the box, or the point is below
the box. If none of these cases are true, then the box
must contain the point.
The 3D version tests all six cases; if none are true, the box contains the point:

bool AABB::Contains(const Vector3& point) const
{
   bool outside = point.x < mMin.x || point.y < mMin.y ||
                  point.z < mMin.z || point.x > mMax.x ||
                  point.y > mMax.y || point.z > mMax.z;
   // If none of these are true, the point is inside the box
   return !outside;
}
For a convex polygon, you can test containment by summing the angles between the point and each pair of adjacent vertices; the point is inside if the angles sum to 2π:

bool ConvexPolygon::Contains(const Vector2& point) const
{
   float sum = 0.0f;
   Vector2 a, b;
   for (size_t i = 0; i < mVertices.size() - 1; i++)
   {
      // Normalized vectors from the point to two adjacent vertices
      a = mVertices[i] - point;
      a.Normalize();
      b = mVertices[i + 1] - point;
      b.Normalize();
      sum += Math::Acos(Vector2::Dot(a, b));
   }
   // Add the angle between the last vertex and the first
   a = mVertices.back() - point;
   a.Normalize();
   b = mVertices.front() - point;
   b.Normalize();
   sum += Math::Acos(Vector2::Dot(a, b));
   // The point is inside if the sum is (approximately) 2 pi
   return Math::NearZero(sum - Math::TwoPi);
}
The AABB has a MinDistSq function of its own, which computes the squared distance from a point to the box (zero if the point is inside):

float AABB::MinDistSq(const Vector3& point) const
{
   // Compute differences for each axis (0 if within the box's extent)
   float dx = Math::Max(mMin.x - point.x, 0.0f);
   dx = Math::Max(dx, point.x - mMax.x);
   float dy = Math::Max(mMin.y - point.y, 0.0f);
   dy = Math::Max(dy, point.y - mMax.y);
   float dz = Math::Max(mMin.z - point.z, 0.0f);
   dz = Math::Max(dz, point.z - mMax.z);
   // Distance squared
   return dx * dx + dy * dy + dz * dz;
}
Once you have the MinDistSq function, you can
implement sphere versus AABB intersection. You find
the minimal distance squared between the center of the
sphere and the AABB. If it is less than or equal to the
radius squared, then the sphere and AABB intersect:
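The test described above can be sketched in a self-contained form. V3, SphereS, and BoxS here are hypothetical minimal types standing in for the book's Sphere and AABB:

```cpp
#include <cassert>
#include <algorithm>

// Hypothetical minimal types for this sketch
struct V3 { float x, y, z; };
struct SphereS { V3 c; float r; };
struct BoxS { V3 mn, mx; };

// Squared distance from a point to the box (0 if inside)
float MinDistSq(const BoxS& b, const V3& p) {
    // For each axis: how far outside the [min, max] extent is p?
    float dx = std::max({b.mn.x - p.x, 0.0f, p.x - b.mx.x});
    float dy = std::max({b.mn.y - p.y, 0.0f, p.y - b.mx.y});
    float dz = std::max({b.mn.z - p.z, 0.0f, p.z - b.mx.z});
    return dx * dx + dy * dy + dz * dz;
}

// Sphere and AABB intersect if the center is within radius of the box
bool Intersect(const SphereS& s, const BoxS& b) {
    return MinDistSq(b, s.c) <= s.r * s.r;
}
```

Comparing squared distances avoids a square root per test, which matters when a frame runs thousands of these checks.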
Capsule-versus-capsule intersection reduces to the minimal distance between the two line segments; the capsules intersect if this squared distance is at most the sum of the radii squared:

bool Intersect(const Capsule& a, const Capsule& b)
{
   float distSq = LineSegment::MinDistSq(a.mSegment,
                                         b.mSegment);
   float sumRadii = a.mRadius + b.mRadius;
   return distSq <= (sumRadii * sumRadii);
}

Similarly, a sphere intersects a plane if the absolute value of the signed distance from the sphere's center to the plane is at most the radius:

bool Intersect(const Sphere& s, const Plane& p)
{
   float dist = Vector3::Dot(s.mCenter, p.mNormal) + p.mD;
   return Math::Abs(dist) <= s.mRadius;
}
Line segment versus plane solves for the t at which the segment touches the plane:

bool Intersect(const LineSegment& l, const Plane& p, float& outT)
{
   // First test if there's a solution for t
   float denom = Vector3::Dot(l.mEnd - l.mStart, p.mNormal);
   if (Math::NearZero(denom))
   {
      // The only way they intersect is if start/end are
      // points on the plane itself (P dot n == -d)
      if (Math::NearZero(Vector3::Dot(l.mStart, p.mNormal) + p.mD))
      {
         outT = 0.0f;
         return true;
      }
      else
      { return false; }
   }
   else
   {
      outT = (-p.mD - Vector3::Dot(l.mStart, p.mNormal)) / denom;
      // Validate t is within bounds of the line segment
      if (outT >= 0.0f && outT <= 1.0f)
      { return true; }
      else
      { return false; }
   }
}
Line segment versus sphere reduces to a quadratic in t:

bool Intersect(const LineSegment& l, const Sphere& s, float& outT)
{
   // Set up the quadratic: X = start - center, Y = end - start
   Vector3 X = l.mStart - s.mCenter;
   Vector3 Y = l.mEnd - l.mStart;
   float a = Vector3::Dot(Y, Y);
   float b = 2.0f * Vector3::Dot(X, Y);
   float c = Vector3::Dot(X, X) - s.mRadius * s.mRadius;
   // Compute discriminant
   float disc = b * b - 4.0f * a * c;
   if (disc < 0.0f) { return false; }
   disc = Math::Sqrt(disc);
   // Compute min and max solutions of t
   float tMin = (-b - disc) / (2.0f * a);
   float tMax = (-b + disc) / (2.0f * a);
   // Accept the first t within the segment's bounds [0, 1]
   if (tMin >= 0.0f && tMin <= 1.0f) { outT = tMin; return true; }
   else if (tMax >= 0.0f && tMax <= 1.0f) { outT = tMax; return true; }
   else { return false; }
}
For the segment tests versus each plane, recall that the
equation for line segment versus plane intersection is as
follows:

t = (-d - Start · n) / ((End - Start) · n)

Because the min point for the box is on its left plane,
which has the normal ⟨-1, 0, 0⟩, the d value is as follows:

d = mMin.x
A helper function tests the segment against one side plane of the box and, on success, saves the t value of the intersection:

bool TestSidePlane(float start, float end, float negd,
                   std::vector<float>& out)
{
   float denom = end - start;
   if (Math::NearZero(denom))
   {
      return false;
   }
   float numer = -start + negd;
   float t = numer / denom;
   // Test that t is within bounds of the segment
   if (t >= 0.0f && t <= 1.0f)
   {
      out.emplace_back(t);
      return true;
   }
   else
   {
      return false;
   }
}

The full segment-versus-AABB test calls TestSidePlane for all six side planes, sorts the candidate t values, and returns the first whose point of intersection is actually inside the box:

bool Intersect(const LineSegment& l, const AABB& b, float& outT)
{
   // Vector to save all possible t values
   std::vector<float> tValues;
   // Test the x planes (and similarly for y and z)
   TestSidePlane(l.mStart.x, l.mEnd.x, b.mMin.x, tValues);
   TestSidePlane(l.mStart.x, l.mEnd.x, b.mMax.x, tValues);
   // ...
   // Sort the t values in ascending order
   std::sort(tValues.begin(), tValues.end());
   // Test if the box contains any of these points of intersection
   Vector3 point;
   for (float t : tValues)
   {
      point = l.PointOnSegment(t);
      if (b.Contains(point))
      {
         outT = t;
         return true;
      }
   }
   return false;
}
Dynamic Objects
The intersection tests covered thus far are
instantaneous tests. In a game, this means that
you test whether two objects intersect on the current
frame. Although this might be sufficient for simple
games, in practice there are issues.
Consider the case where a character fires a bullet at a
piece of paper. Suppose you use a bounding sphere for the
bullet and a box for the paper. On each frame, you test
whether the bullet intersects with the paper. Because the
bullet travels quickly, it’s unlikely that there’s one
specific frame where the bullet exactly intersects with the
paper. This means that instantaneous tests will miss the
intersection, as in Figure 10.12.
For each sphere, you have the center positions during the
last frame and during this frame. You can represent
these positions using the same parametric equation as
for line segments, where the position last frame is t = 0
and the position this frame is t = 1. For sphere P, P0 is
the position last frame and P1 is the position this frame.
Similarly, the sphere Q has the positions Q0 and Q1. So,
these are the parametric equations for the positions of
spheres P and Q:

P(t) = P0 + (P1 - P0)t
Q(t) = Q0 + (Q1 - Q0)t
bool SweptSphere(const Sphere& P0, const Sphere& P1,
                 const Sphere& Q0, const Sphere& Q1, float& outT)
{
   // Compute X, Y, a, b, and c
   Vector3 X = P0.mCenter - Q0.mCenter;
   Vector3 Y = P1.mCenter - P0.mCenter -
               (Q1.mCenter - Q0.mCenter);
   float a = Vector3::Dot(Y, Y);
   float b = 2.0f * Vector3::Dot(X, Y);
   float sumRadii = P0.mRadius + Q0.mRadius;
   float c = Vector3::Dot(X, X) - sumRadii * sumRadii;
   // Solve discriminant
   float disc = b * b - 4.0f * a * c;
   if (disc < 0.0f) { return false; }
   disc = Math::Sqrt(disc);
   // We only care about the smaller (first) solution
   outT = (-b - disc) / (2.0f * a);
   if (outT >= 0.0f && outT <= 1.0f) { return true; }
   else { return false; }
}
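A self-contained sketch of the same swept-sphere quadratic follows (P3, Sub, and Dot are hypothetical minimal helpers, and a guard is added for the no-relative-motion case, which the quadratic cannot handle):

```cpp
#include <cassert>
#include <cmath>

struct P3 { float x, y, z; };
P3 Sub(const P3& a, const P3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float Dot(const P3& a, const P3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Spheres move linearly from c0 to c1 over t in [0, 1]. Returns true and
// sets outT to the first time the center distance equals the radii sum.
bool SweptSphere(const P3& p0, const P3& p1, float pRadius,
                 const P3& q0, const P3& q1, float qRadius, float& outT) {
    // X = initial separation, Y = relative displacement over the frame
    P3 X = Sub(p0, q0);
    P3 Y = Sub(Sub(p1, p0), Sub(q1, q0));
    float sumRadii = pRadius + qRadius;
    // Quadratic a*t^2 + b*t + c = 0 for |X + tY| = sumRadii
    float a = Dot(Y, Y);
    float b = 2.0f * Dot(X, Y);
    float c = Dot(X, X) - sumRadii * sumRadii;
    if (a < 1e-8f) { return false; } // no relative motion; use a static test
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) { return false; }
    // Only the smaller root matters (first moment of contact)
    outT = (-b - std::sqrt(disc)) / (2.0f * a);
    return outT >= 0.0f && outT <= 1.0f;
}
```

For a sphere of radius 0.5 sweeping from the origin to (10, 0, 0) past a stationary sphere of radius 0.5 at (5, 0, 0), first contact happens at t = 0.4, where the centers are exactly one radius sum apart.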
class BoxComponent : public Component
{
public:
   BoxComponent(class Actor* owner);
   ~BoxComponent();
   // ...
private:
   AABB mObjectBox;
   AABB mWorldBox;
   bool mShouldRotate;
};
When creating the component for an actor with a mesh, you then set the BoxComponent's object-space bounds from the mesh's AABB:

bc->SetObjectBox(mesh->GetBox());
Listing 10.11
BoxComponent::OnUpdateWorldTransform
Implementation
void BoxComponent::OnUpdateWorldTransform()
{
   // Reset to object space box
   mWorldBox = mObjectBox;
   // Scale
   mWorldBox.mMin *= mOwner->GetScale();
   mWorldBox.mMax *= mOwner->GetScale();
   // Rotate
   if (mShouldRotate)
   {
      mWorldBox.Rotate(mOwner->GetRotation());
   }
   // Translate
   mWorldBox.mMin += mOwner->GetPosition();
   mWorldBox.mMax += mOwner->GetPosition();
}
class PhysWorld
{
public:
   // ...
private:
   std::vector<class BoxComponent*> mBoxes;
};
struct CollisionInfo
{
   // Point of collision
   Vector3 mPoint;
   // Normal at collision
   Vector3 mNormal;
   // Component and actor collided with
   class BoxComponent* mBox;
   class Actor* mActor;
};
bool PhysWorld::SegmentCast(const LineSegment& l, CollisionInfo& outColl)
{
   bool collided = false;
   // Initialize closestT to infinity, so the first
   // intersection will always update closestT
   float closestT = Math::Infinity;
   Vector3 norm;
   // Test against all boxes
   for (auto box : mBoxes)
   {
      float t;
      // Does the segment intersect with the box?
      if (Intersect(l, box->GetWorldBox(), t, norm) && t < closestT)
      {
         closestT = t;
         outColl.mPoint = l.PointOnSegment(t);
         outColl.mNormal = norm;
         outColl.mBox = box;
         outColl.mActor = box->GetOwner();
         collided = true;
      }
   }
   return collided;
}
Two pieces of gameplay code then use these functions. First, Actor gains a RotateToNewForward function that rotates the actor from its original forward (unit x) to a new forward vector. If the dot product between the two is nearly 1, the new forward is already unit x; if it's nearly -1, you rotate by pi about the z-axis; otherwise, you rotate by the angle between the vectors about their normalized cross product:

if (dot > 0.9999f)
{ SetRotation(Quaternion::Identity); }
else if (dot < -0.9999f)
{ SetRotation(Quaternion(Vector3::UnitZ, Math::Pi)); }
else
{
   axis.Normalize();
   SetRotation(Quaternion(axis, angle));
}

Second, the ball projectile's move component performs a SegmentCast each frame; on a hit, it reflects its direction about the surface normal and rotates to face the new forward:

PhysWorld::CollisionInfo info;
if (phys->SegmentCast(ls, info))
{
   dir = Vector3::Reflect(dir, info.mNormal);
   mOwner->RotateToNewForward(dir);
}
MoveComponent::Update(deltaTime);
void PhysWorld::TestPairwise(std::function<void(Actor*, Actor*)> f)
{
   // Naive implementation: O(n^2) pairwise tests
   for (size_t i = 0; i < mBoxes.size(); i++)
   {
      // No need to test vs. itself or previous pairs
      for (size_t j = i + 1; j < mBoxes.size(); j++)
      {
         BoxComponent* a = mBoxes[i];
         BoxComponent* b = mBoxes[j];
         if (Intersect(a->GetWorldBox(), b->GetWorldBox()))
         { f(a->GetOwner(), b->GetOwner()); }
      }
   }
}

Sweep-and-prune improves on this by first sorting the boxes along one axis; once another box's min.x passes the current box's max.x, no later boxes can possibly intersect it:

void PhysWorld::TestSweepAndPrune(std::function<void(Actor*, Actor*)> f)
{
   // Sort by min.x
   std::sort(mBoxes.begin(), mBoxes.end(),
      [](BoxComponent* a, BoxComponent* b) {
         return a->GetWorldBox().mMin.x <
                b->GetWorldBox().mMin.x;
      });
   for (size_t i = 0; i < mBoxes.size(); i++)
   {
      BoxComponent* a = mBoxes[i];
      float max = a->GetWorldBox().mMax.x;
      for (size_t j = i + 1; j < mBoxes.size(); j++)
      {
         BoxComponent* b = mBoxes[j];
         // If box[j] min.x is past box[i] max.x, no
         // remaining boxes can possibly intersect
         // against box[i], so stop testing
         if (b->GetWorldBox().mMin.x > max)
         {
            break;
         }
         else if (Intersect(a->GetWorldBox(), b->GetWorldBox()))
         {
            f(a->GetOwner(), b->GetOwner());
         }
      }
   }
}
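The early-out can be demonstrated on plain one-axis intervals. This standalone sketch uses hypothetical Interval records rather than BoxComponents, and reports only the pairs whose x intervals overlap; the full version would then run the real AABB intersection on each candidate pair:

```cpp
#include <cassert>
#include <algorithm>
#include <vector>
#include <utility>

// One-axis extent of a box (hypothetical stand-in for a BoxComponent)
struct Interval { float minX, maxX; int id; };

// Report overlapping-interval pairs, pruning once min.x passes max.x
std::vector<std::pair<int, int>> SweepAndPrune(std::vector<Interval> boxes) {
    // Sort by min.x
    std::sort(boxes.begin(), boxes.end(),
        [](const Interval& a, const Interval& b) { return a.minX < b.minX; });
    std::vector<std::pair<int, int>> pairs;
    for (size_t i = 0; i < boxes.size(); i++) {
        float max = boxes[i].maxX;
        for (size_t j = i + 1; j < boxes.size(); j++) {
            // Every later box starts at an even greater min.x,
            // so none of them can overlap boxes[i] either
            if (boxes[j].minX > max) { break; }
            pairs.emplace_back(boxes[i].id, boxes[j].id);
        }
    }
    return pairs;
}
```

With two tight clusters of boxes, the inner loop breaks almost immediately, so the work is close to the cost of the sort rather than the full quadratic scan.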
The basic idea is that every frame, you test the player’s
collision against every PlaneActor. If the AABBs
intersect, you adjust the player’s position so that it no
longer collides with the wall. To understand this
calculation, it helps to visualize the problem in 2D.
In 3D, the principle is the same, except there are now six
difference values because there are three axes. The
FPSActor::FixCollisions function as shown in
Listing 10.18 implements this minimum overlap test.
Importantly, because changing the position of the player
changes the player's BoxComponent, you must recompute
the world bounds of the BoxComponent after each
adjustment. You then call this function from
UpdateActor, which means you call it after the
MoveComponent updates the player’s position every
frame.
void FPSActor::FixCollisions()
{
   // Need to recompute world transform to update world box
   ComputeWorldTransform();
   const AABB& playerBox = mBoxComp->GetWorldBox();
   Vector3 pos = GetPosition();
   auto& planes = GetGame()->GetPlanes();
   for (auto pa : planes)
   {
      const AABB& planeBox = pa->GetBox()->GetWorldBox();
      if (Intersect(playerBox, planeBox))
      {
         // Calculate all our differences
         float dx1 = planeBox.mMax.x - playerBox.mMin.x;
         float dx2 = planeBox.mMin.x - playerBox.mMax.x;
         // Ditto for dy
         // Ditto for dz
         // Set dx/dy/dz to whichever of the two differences
         // has the lower absolute value
         float dx = Math::Abs(dx1) < Math::Abs(dx2) ? dx1 : dx2;
         // Fix the position along the axis of minimal overlap
         if (Math::Abs(dx) <= Math::Abs(dy) &&
             Math::Abs(dx) <= Math::Abs(dz))
         {
            pos.x += dx;
         }
         else if (Math::Abs(dy) <= Math::Abs(dx) &&
                  Math::Abs(dy) <= Math::Abs(dz))
         {
            pos.y += dy;
         }
         else
         {
            pos.z += dz;
         }
         // Update position and box component
         SetPosition(pos);
         mBoxComp->OnUpdateWorldTransform();
      }
   }
}
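The minimum-overlap correction is easiest to verify in 2D. This standalone sketch (Box2 and FixOverlap are hypothetical names) pushes box a out of box b along whichever axis requires the smaller move:

```cpp
#include <cassert>
#include <cmath>

// Axis-aligned 2D boxes (hypothetical stand-ins)
struct Box2 { float minX, minY, maxX, maxY; };

// If a overlaps b, push a out along the axis of minimal overlap;
// the applied (dx, dy) correction is written to outDx/outDy.
void FixOverlap(Box2& a, const Box2& b, float& outDx, float& outDy) {
    outDx = 0.0f;
    outDy = 0.0f;
    bool overlap = a.minX < b.maxX && a.maxX > b.minX &&
                   a.minY < b.maxY && a.maxY > b.minY;
    if (!overlap) { return; }
    // Two candidate corrections per axis (push right/left, up/down)
    float dx1 = b.maxX - a.minX; // push a right
    float dx2 = b.minX - a.maxX; // push a left
    float dy1 = b.maxY - a.minY; // push a up
    float dy2 = b.minY - a.maxY; // push a down
    float dx = std::fabs(dx1) < std::fabs(dx2) ? dx1 : dx2;
    float dy = std::fabs(dy1) < std::fabs(dy2) ? dy1 : dy2;
    // Apply only the smaller of the two corrections
    if (std::fabs(dx) <= std::fabs(dy)) { outDx = dx; } else { outDy = dy; }
    a.minX += outDx; a.maxX += outDx;
    a.minY += outDy; a.maxY += outDy;
}
```

A box straddling the right edge of a wall needs only a small push right, even though pushing it all the way down or across would also separate the pair; taking the minimal overlap keeps the correction visually unnoticeable.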
SUMMARY
This chapter provides an in-depth introduction to
collision detection techniques in games. Games
might use many different geometric types for
collision detection. A line segment has a start point
and an end point. The representation of a plane is its
normal and signed distance to the origin. Spheres are
simple bounding volumes but may cause many false
positives for characters of different shapes. Axis-aligned
bounding boxes have sides aligned with the coordinate
axes, while oriented bounding boxes do not have this
restriction.
ADDITIONAL READING
Christer Ericson provides extremely detailed
coverage of collision detection, covering both the
mathematical bases of the algorithms and usable
implementations. Ian Millington doesn’t have as
much coverage of collision detection algorithms but
explains how to incorporate collision in the context
of physics engine movement, which is something this
chapter does not discuss in detail.
Ericson, Christer. Real-time Collision
Detection. San Francisco: Morgan
Kaufmann, 2005.
Millington, Ian. Game Physics Engine
Development, 2nd edition. Boca Raton:
CRC Press, 2010.
EXERCISES
In this chapter’s first exercise, you add jumping to
the chapter’s game project. In the second exercise,
you improve upon the sweep-and-prune
implementation covered in this chapter. In the last
exercise you implement OBB versus OBB
intersection between oriented bounding boxes.
Exercise 10.1
Add jumping to the player character. The ground
objects already have corresponding axis-aligned
bounding boxes. To implement jumping, select a key
(such as the spacebar). When the player presses the
jump key, set an additional velocity in the positive z
direction. Similarly, add a negative z acceleration for
gravity that slows down the jump velocity. After the
player hits the apex of the jump, he or she starts
falling. While the player is falling, you can detect in
FixCollisions whether the player lands on top of
a PlaneActor (because you know that the top of the
plane corresponds to the dz2 case). While the player is
on the ground, disable gravity and set the z velocity
back to zero.
Exercise 10.2
Change the SweepAndPrune function to sweep and
prune across all three coordinate axes. Have
PhysWorld maintain three vectors of boxes and
change it so that AddBox and RemoveBox touch all
three vectors. Then sort each vector by its
corresponding axis.
Exercise 10.3
Implement OBB versus OBB intersection in a new
Intersect function. As with AABBs, use the
separating axis approach (that is, figure out whether
they cannot intersect and then logically not the
result). However, whereas there are 3 axes to test for
AABBs, for OBBs there are a total of 15 different axes
to test.
CHAPTER 11
USER INTERFACES
class Font
{
public:
   Font();
   ~Font();
   // Load/unload the font from a file
   bool Load(const std::string& fileName);
   void Unload();
   // Given a string and this font, draw the text to a texture
   class Texture* RenderText(const std::string& text,
                             const Vector3& color, int pointSize);
private:
   // Map of point sizes to font data
   std::unordered_map<int, TTF_Font*> mFontData;
};
std::vector<int> fontSizes = {
8, 9, 10, 11, 12, 14, 16, 18, 20, 22, 24, 26, 28,
30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 52, 56,
};
for (auto& size : fontSizes)
{
   TTF_Font* font = TTF_OpenFont(fileName.c_str(), size);
   if (font == nullptr)
   {
      SDL_Log("Failed to load font %s in size %d", fileName.c_str(), size);
      return false;
   }
   mFontData.emplace(size, font);
}
return true;
Texture* Font::RenderText(const std::string& text,
                          const Vector3& color, int pointSize)
{
   Texture* texture = nullptr;
   // Convert to SDL_Color
   SDL_Color sdlColor;
   sdlColor.r = static_cast<Uint8>(color.x * 255);
   sdlColor.g = static_cast<Uint8>(color.y * 255);
   sdlColor.b = static_cast<Uint8>(color.z * 255);
   sdlColor.a = 255;
   // Find the font data for this point size
   auto iter = mFontData.find(pointSize);
   if (iter != mFontData.end())
   {
      // Draw to a surface (blended for alpha)
      SDL_Surface* surf = TTF_RenderText_Blended(iter->second, text.c_str(),
         sdlColor);
      if (surf != nullptr)
      {
         // Convert from surface to texture
         texture = new Texture();
         texture->CreateFromSurface(surf);
         SDL_FreeSurface(surf);
      }
   }
   else
   {
      SDL_Log("Point size %d is unsupported", pointSize);
   }
   return texture;
}
UI SCREENS
Because a UI system might be used for many things,
including the HUD and menus, flexibility is an
important feature. Although there are data-driven
systems that utilize tools such as Adobe Flash, this
chapter instead focuses on a code-driven
implementation. However, many of the ideas
presented here can still apply to a more data-driven
system.
class UIScreen
{
public:
   UIScreen(class Game* game);
   virtual ~UIScreen();
   // UIScreen subclasses can override these
   virtual void Update(float deltaTime);
   virtual void Draw(class Shader* shader);
   virtual void ProcessInput(const uint8_t* keys);
   virtual void HandleKeyPress(int key);
   // Tracks whether the UI is active or closing
   enum UIState { EActive, EClosing };
   // Set state to closing
   void Close();
   UIState GetState() const { return mState; }
protected:
   class Game* mGame;
   Vector2 mTitlePos;
   // State
   UIState mState;
};
In UpdateGame, every active UI screen in the stack updates each frame:

for (auto ui : mUIStack)
{
   if (ui->GetState() == UIScreen::EActive)
   {
      ui->Update(deltaTime);
   }
}

When drawing, each screen in the stack draws with the sprite shader via ui->Draw(mSpriteShader);. For input, actors only receive input during gameplay; otherwise, only the topmost UI screen does:

if (mGameState == EGameplay)
{
   for (auto actor : mActors)
   {
      if (actor->GetState() == Actor::EActive)
      {
         actor->ProcessInput(state);
      }
   }
}
else if (!mUIStack.empty())
{
   mUIStack.back()->ProcessInput(state);
}
PauseMenu::PauseMenu(Game* game)
   :UIScreen(game)
{
   mGame->SetState(Game::EPaused);
   SetTitle("PAUSED");
}

PauseMenu::~PauseMenu()
{
   mGame->SetState(Game::EGameplay);
}
Finally, the HandleKeyPress function closes the pause
menu if the player presses the Escape key:
void PauseMenu::HandleKeyPress(int key)
{
   UIScreen::HandleKeyPress(key);
   if (key == SDLK_ESCAPE)
   {
      Close();
   }
}
Buttons
Most menus in games also have buttons that the
player can interact with. For example, a pause menu
might have buttons for resuming the game, quitting
the game, configuring options, and so on. Because
different UI screens may need buttons, it makes
sense to add this support to the base UIScreen
class.
class Button
{
public:
   Button(const std::string& name, class Font* font,
          std::function<void()> onClick,
          const Vector2& pos, const Vector2& dims);
   ~Button();
   // Returns true if the point is within the button's bounds
   bool ContainsPoint(const Vector2& pt) const;
   // Called when the button is clicked
   void OnClick();
   // Getters/setters
   // ...
private:
   std::function<void()> mOnClick;
   std::string mName;
   Vector2 mPosition;
   Vector2 mDimensions;
   bool mHighlighted;
};

ContainsPoint uses the same "not outside" test as an AABB, treating mPosition as the center of the button:

bool Button::ContainsPoint(const Vector2& pt) const
{
   bool no = pt.x < (mPosition.x - mDimensions.x / 2.0f) ||
             pt.x > (mPosition.x + mDimensions.x / 2.0f) ||
             pt.y < (mPosition.y - mDimensions.y / 2.0f) ||
             pt.y > (mPosition.y + mDimensions.y / 2.0f);
   return !no;
}

OnClick simply calls the attached handler, if one exists:

void Button::OnClick()
{
   if (mOnClick)
   {
      mOnClick();
   }
}
void UIScreen::AddButton(const std::string& name,
                         std::function<void()> onClick)
{
   Vector2 dims(static_cast<float>(mButtonOn->GetWidth()),
                static_cast<float>(mButtonOn->GetHeight()));
   Button* b = new Button(name, mFont, onClick, mNextButtonPos, dims);
   mButtons.emplace_back(b);
   // Update position of next button:
   // Move down by height of button plus padding
   mNextButtonPos.y -= mButtonOn->GetHeight() + 20.0f;
}
You also want the player to use the mouse to select and
click on buttons. Recall that the game uses a relative
mouse mode so that mouse movement turns the
camera. To allow the player to highlight and click on
buttons, you need to disable this relative mouse mode.
You can leave responsibility for this to the PauseMenu
class; in the constructor, it disables relative mouse mode,
and then it reenables it in the destructor. This way, when
the player returns to gameplay, the mouse can once
again rotate the camera.
if (!mButtons.empty())
{
   // Get position of mouse
   int x, y;
   SDL_GetMouseState(&x, &y);
   // Convert to (0,0) center coordinates (flip +y)
   Vector2 mousePos(static_cast<float>(x), static_cast<float>(y));
   mousePos.x -= mGame->GetRenderer()->GetScreenWidth() * 0.5f;
   mousePos.y = mGame->GetRenderer()->GetScreenHeight() * 0.5f
                - mousePos.y;
   // Highlight any buttons
   for (auto b : mButtons)
   {
      if (b->ContainsPoint(mousePos))
      {
         b->SetHighlighted(true);
      }
      else
      {
         b->SetHighlighted(false);
      }
   }
}
AddButton("Resume", [this]() {
Close();
});
AddButton("Quit", [this]() {
mGame->SetState(Game::EQuit);
});
Dialog Boxes
For certain menu actions, such as quitting the game,
it’s preferable to show the player a confirmation
dialog box. This way, if the player clicks on the first
button by mistake, he or she can still correct the
mistake. Using a UI screen stack makes it easy to
transfer control from one UI screen (such as the
pause menu) to a dialog box. In fact, you can
implement the dialog box with all the existing
UIScreen functionality. To do this, you can make a
new subclass of UIScreen called DialogBox.
DialogBox::DialogBox(Game* game, const std::string& text,
                     std::function<void()> onOK)
   :UIScreen(game)
{
   mBackground = mGame->GetRenderer()->GetTexture("Assets/DialogBG.png");
   SetTitle(text);
   // Setup buttons
   AddButton("OK", [onOK]() {
      onOK();
   });
   AddButton("Cancel", [this]() {
      Close();
   });
}
Then, in PauseMenu, you change the Quit button so that it creates a DialogBox instead of quitting immediately:

AddButton("Quit", [this]() {
   new DialogBox(mGame, "Do you want to quit?",
      [this]() {
         mGame->SetState(Game::EQuit);
      });
});
Figure 11.3 shows this dialog box for quitting the game.
HUD ELEMENTS
The types of elements in the HUD vary depending on
the game. Such elements include showing hit points
or ammo count, a score, or an arrow that points to
the next objective. This section looks at two types of
elements that are common for first-person games: a
crosshair (or aiming reticule) and a radar that
shows target positions.
The HUD tracks whether the crosshair currently targets an enemy:

bool mTargetEnemy;

Each frame, UpdateCrosshair casts a segment along the camera's forward and checks whether the first thing hit has a TargetComponent:

void HUD::UpdateCrosshair(float deltaTime)
{
   // Reset to regular cursor
   mTargetEnemy = false;
   // Make a line segment along the camera's forward
   const float cAimDist = 5000.0f;
   Vector3 start, dir;
   mGame->GetRenderer()->GetScreenDirection(start, dir);
   LineSegment l(start, start + dir * cAimDist);
   // Segment cast
   PhysWorld::CollisionInfo info;
   if (mGame->GetPhysWorld()->SegmentCast(l, info))
   {
      // Is the actor we hit a target?
      for (auto tc : mTargetComps)
      {
         if (tc->GetOwner() == info.mActor)
         {
            mTargetEnemy = true;
            break;
         }
      }
   }
}
Adding Radar
A game may have a radar that displays nearby
enemies (or other objects) within a certain radius of
the player. You can represent these enemies on the
radar with blips (which look like dots or circles on
the radar). This way, the player can get a sense of
whether there are enemies around. Some games
always show enemies on the radar, while others show
enemies only under certain conditions (such as if the
enemy recently fired a weapon). However, these
conditions would only be an extension of a basic
approach that shows all enemies.
You can leverage existing code and say that any actor
that has a TargetComponent should also appear on the
radar.
// 2D offsets of blips relative to the radar's center
std::vector<Vector2> mBlips;
// Adjust range of radar and radius
float mRadarRange;
float mRadarRadius;

In UpdateRadar, you first clear the blips from the previous frame. Then, for each actor with a TargetComponent that's within mRadarRange of the player, you scale the 2D player-to-target vector into the radar's on-screen radius and rotate it so that the player's forward always points up on the radar:

mBlips.clear();
// ...
Vector2 blipPos = playerToTarget;
blipPos *= mRadarRadius/mRadarRange;
// Rotate blipPos
blipPos = Vector2::Transform(blipPos, rotMat);
mBlips.emplace_back(blipPos);

When drawing the HUD, you then draw a blip texture at each of these offsets from the radar's center.
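The scale-then-rotate blip math can be checked standalone. In this sketch (ComputeBlip and V2 are hypothetical names), fwdAngle is the player's heading measured counterclockwise from the radar's up axis, so a target straight ahead always lands straight up on the radar:

```cpp
#include <cassert>
#include <cmath>

struct V2 { float x, y; };

// Convert a world-space offset from player to target into a radar blip
// offset. The blip space rotates by -fwdAngle so the player's forward
// always points up (+y) on the radar.
// Returns false if the target is out of radar range.
bool ComputeBlip(V2 playerToTarget, float fwdAngle,
                 float radarRange, float radarRadius, V2& outBlip) {
    float distSq = playerToTarget.x * playerToTarget.x +
                   playerToTarget.y * playerToTarget.y;
    if (distSq > radarRange * radarRange) { return false; }
    // Scale world distance into the radar's on-screen radius
    float s = radarRadius / radarRange;
    V2 p{playerToTarget.x * s, playerToTarget.y * s};
    // Rotate by -fwdAngle (counterclockwise rotation matrix)
    float c = std::cos(-fwdAngle);
    float sn = std::sin(-fwdAngle);
    outBlip = {p.x * c - p.y * sn, p.x * sn + p.y * c};
    return true;
}
```

With a range of 100 world units and a 50-pixel radar, a target 80 units straight ahead becomes a blip 40 pixels above the radar's center, regardless of the player's heading.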
{
   "TextMap":{
      "PauseTitle": "PAUSED",
      "ResumeButton": "Resume",
      "QuitButton": "Quit",
      "OKButton": "OK",
      "CancelButton": "Cancel"
   }
}

RenderText then treats its input string as a key into this map, first looking up the localized text (here called actualText), and renders with the UTF-8 variant so non-ASCII text works:

SDL_Surface* surf = TTF_RenderUTF8_Blended(iter->second,
   actualText.c_str(), sdlColor);
tip
If the game code has finalized English text, a quick hack to localize the text is
to use the finalized English text as the text key. This way, you don’t have to
track down every single non-localized string usage in the code. However, this
can be dangerous if someone later changes the English strings in the code,
thinking that this will change the text onscreen!
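A minimal sketch of the lookup (gTextMap and GetText are hypothetical names) shows a useful safety net: falling back to the key itself when no entry exists, so unlocalized strings still appear onscreen and are easy to spot:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical text map, as loaded from the JSON above
std::unordered_map<std::string, std::string> gTextMap = {
    {"PauseTitle", "PAUSED"},
    {"QuitButton", "Quit"},
};

// Look up a key; fall back to the key itself if there's no entry
std::string GetText(const std::string& key) {
    auto iter = gTextMap.find(key);
    return (iter != gTextMap.end()) ? iter->second : key;
}
```

This fallback is also what makes the "English text as key" hack from the tip above workable: a missing entry just shows the English string.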
SUPPORTING MULTIPLE
RESOLUTIONS
For PC and mobile games, it’s very common to have
players with different screen resolutions. On a PC,
common monitor resolutions include 1080p
(1920×1080), 1440p (2560×1440), and 4K
(3840×2160). On mobile platforms, there are a
staggering number of different device resolutions.
Although the Renderer class currently supports
creating the window at different resolutions, the UI
code in this chapter assumes a fixed resolution.
GAME PROJECT
This chapter’s game project demonstrates all the
features discussed in this chapter except for
supporting multiple resolutions. The Game class has
a UI stack, along with a UIScreen class, a
PauseMenu class, and a DialogBox class. The HUD
demonstrates both the aiming reticule and the radar.
The code also implements text localization. The code
is available in the book’s GitHub repository, in the
Chapter11 directory. Open Chapter11-
windows.sln in Windows and Chapter11-
mac.xcodeproj on Mac.
SUMMARY
This chapter provides a high-level overview of the
challenges involved in implementing the user
interface in code. Using the SDL TTF library is a
convenient way to render fonts, as it can load in
TrueType fonts and then render the text to a texture.
In the UI stack system, you represent each unique UI
screen as an element on the UI stack. At any point in
time, only the topmost screen on the UI might
receive input from the player. You can extend this
system to support buttons as well as dialog boxes.
ADDITIONAL READING
Desi Quintans’ short article gives examples of good
and bad game UI, from a design perspective. Luis
Sempé, a UI programmer for games including Deus
Ex: Human Revolution, has written the only book
solely dedicated to programming UIs for games. (In
the interest of full disclosure, I worked with the
author many years ago.) Finally, Joel Spolsky’s book
is for UI design in general, but it provides insight
into how to create an effective UI.
EXERCISES
In this chapter’s exercises, you explore adding a main
menu as well as making changes to the game’s HUD.
Exercise 11.1
Create a main menu. To support this, the game class
needs a new state called EMainMenu. The game
should first start in this state and display a UI screen
with the menu options Start and Quit. If the player
clicks Start, the game should switch to gameplay. If
the player clicks Quit, the menu should show a dialog
box confirming that the player wants to quit.
Exercise 11.2
Modify the radar so that it uses different blip
textures, depending on whether the actor is above or
below the player. Use the provided BlipUp.png and
BlipDown.png textures to show these different
states. Testing this feature may require changing the
positions of some of the target actors in order to
more clearly distinguish the height.
Exercise 11.3
Implement an onscreen 2D arrow that points to a
specific actor. Create a new type of actor called
ArrowTarget and place it somewhere in the game
world. Then, in the HUD, compute the vector from
the player to the ArrowTarget. Use the angle
between this and the player’s forward on the x-y
plane to determine the angle to rotate the onscreen
2D arrow. Finally, add code to
UIScreen::DrawTexture to support rotating a
texture (with a rotation matrix).
CHAPTER 12
SKELETAL ANIMATION
note
Because skeletal animation has bones and vertices that deform along the
bones, some call this technique skinned animation. The “skin” in this case
is the model’s vertices.
Similarly, the terms bone and joint, though different in the context of anatomy,
are interchangeable terms in the context of skeletal animation.
Suppose that you store local pose data for all the bones.
One way to represent position and orientation is with a
transform matrix. Given a point in the bone’s coordinate
space, this local pose matrix would transform the point
into the parent’s coordinate space.
struct BoneTransform
{
   Quaternion mRotation;
   Vector3 mTranslation;
   // Convert to matrix
   Matrix4 ToMatrix() const;
};
struct Bone
{
   BoneTransform mLocalBindPose;
   std::string mName;
   int mParent;
};
{
   "version":1,
   "bonecount":68,
   "bones":[
      {
         "name":"root",
         "parent":-1,
         "bindpose":{
            "rot":[0.000000,0.000000,0.000000,1.000000],
            "trans":[0.000000,0.000000,0.000000]
         }
      },
      {
         "name":"pelvis",
         "parent":0,
         "bindpose":{
            "rot":[0.001285,0.707106,-0.001285,-0.707106],
            "trans":[0.000000,-1.056153,96.750603]
         }
      },
      // ...
   ]
}
Animation Data
Much the way you describe the bind pose of a
skeleton in terms of local poses for each of the bones,
you can describe any arbitrary pose. More formally,
the current pose of a skeleton is just the set of local
poses for each bone. An animation is then simply a
sequence of poses played over time. As with the bind
pose, you can convert these local poses into global
pose matrices for each bone, as needed.
To blend between two poses, BoneTransform provides an Interpolate function that slerps the rotation and lerps the translation:

BoneTransform BoneTransform::Interpolate(const BoneTransform& a,
   const BoneTransform& b, float f)
{
   BoneTransform retVal;
   retVal.mRotation = Quaternion::Slerp(a.mRotation, b.mRotation, f);
   retVal.mTranslation = Vector3::Lerp(a.mTranslation,
      b.mTranslation, f);
   return retVal;
}
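The two interpolation flavors can be sketched and checked in isolation. Vec, Quat, Lerp, and Slerp are hypothetical minimal stand-ins; this Slerp assumes unit quaternions whose dot product is non-negative (the shortest-arc case):

```cpp
#include <cassert>
#include <cmath>

struct Vec { float x, y, z; };
struct Quat { float x, y, z, w; }; // (x,y,z) = axis*sin(angle/2), w = cos(angle/2)

Vec Lerp(const Vec& a, const Vec& b, float f) {
    return {a.x + f * (b.x - a.x), a.y + f * (b.y - a.y), a.z + f * (b.z - a.z)};
}

// Spherical linear interpolation (assumes unit quaternions, dot >= 0)
Quat Slerp(const Quat& a, const Quat& b, float f) {
    float cosom = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    // For nearly parallel quaternions, fall back to normalized lerp
    if (cosom > 0.9995f) {
        Quat r{a.x + f * (b.x - a.x), a.y + f * (b.y - a.y),
               a.z + f * (b.z - a.z), a.w + f * (b.w - a.w)};
        float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
        return {r.x/len, r.y/len, r.z/len, r.w/len};
    }
    float omega = std::acos(cosom);
    float s0 = std::sin((1.0f - f) * omega) / std::sin(omega);
    float s1 = std::sin(f * omega) / std::sin(omega);
    return {s0*a.x + s1*b.x, s0*a.y + s1*b.y, s0*a.z + s1*b.z, s0*a.w + s1*b.w};
}
```

Slerp between the identity and a 90° z-axis rotation at f = 0.5 yields exactly a 45° rotation, which plain component-wise lerp would not.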
Skinning
Skinning involves associating vertices in the 3D
model with one or more bones in the corresponding
skeleton. (This is different from the term skinning in
a non-animation context.) Then, when drawing a
vertex, the position and orientation of any associated
bones influence the position of the vertex. Because
the skinning of a model does not change during the
game, the skinning information is an attribute of
each vertex.
In a typical implementation of skinning, each vertex can
have associations with up to four different bones. Each of
these associations has a weight, which designates how
much each of the four bones influences the vertex. These
weights must sum to one. For example, the spine and left
hip bone might influence a vertex on the lower-left part
of the torso of the character. If the vertex is closer to the
spine, it might have a weight of 0.7 for the spine bone
and 0.3 for the hip bone. If a vertex has only one bone
that influences it, as is common, then that one bone
simply has a weight of 1.0.
Any vertex that’s influenced by the spine can then use the
precomputed matrix from the palette. For the case of the
vertex solely influenced by the spine, its transformed
position is as follows:
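The weighted blend itself can be verified numerically. To keep this sketch short, each palette entry here is a pure translation standing in for a full matrix (Pt, Apply, and SkinVertex are hypothetical names):

```cpp
#include <cassert>
#include <cmath>

struct Pt { float x, y, z; };

// For this sketch, each palette entry is a pure translation, so applying
// a "matrix" reduces to adding its translation to the vertex position
Pt Apply(const Pt& v, const Pt& translation) {
    return {v.x + translation.x, v.y + translation.y, v.z + translation.z};
}

// Blend up to four bone transforms by their weights (weights sum to 1)
Pt SkinVertex(const Pt& v, const Pt palette[4],
              const int bones[4], const float weights[4]) {
    Pt result{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; i++) {
        // Transform the vertex by this bone, then scale by its weight
        Pt p = Apply(v, palette[bones[i]]);
        result.x += weights[i] * p.x;
        result.y += weights[i] * p.y;
        result.z += weights[i] * p.z;
    }
    return result;
}
```

With bone 1 translated 10 units along x and weights of 0.7/0.3 as in the torso example above, a vertex at (1, 2, 3) ends up at (4, 2, 3): 70% of the unmoved position plus 30% of the fully translated one.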
IMPLEMENTING SKELETAL
ANIMATION
With the mathematical foundations established, you
can now add skeletal animation support to the game.
First, you add support for the additional vertex
attributes that a skinned model needs (bone
influences and weights), and then you draw the
model in bind pose. Next, you add support for
loading the skeleton and compute the inverse bind
pose for each bone. Then, you can calculate the
current pose matrices of an animation and save the
matrix palette. This allows you to draw the model in
the first frame of an animation. Finally, you add
support for updating the animation based on delta
time.
enum Layout
{
   PosNormTex,
   PosNormSkinTex
};
You then modify the VertexArray constructor so that
it takes in a Layout as a parameter. In the code
for the constructor you check the layout to determine
how to define the vertex array attributes. For the case of
PosNormTex, you use the previously written vertex
attribute code. Otherwise, if the layout is
PosNormSkinTex, you define the layout as in Listing
12.2.
if (layout == PosNormTex)
{
   // ... (previously written vertex attribute code)
}
else if (layout == PosNormSkinTex)
{
   // Position is 3 floats
   glEnableVertexAttribArray(0);
   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, vertexSize, 0);
   // Normal is 3 floats
   glEnableVertexAttribArray(1);
   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, vertexSize,
      reinterpret_cast<void*>(sizeof(float) * 3));
   // Skinning bones (keep as ints)
   glEnableVertexAttribArray(2);
   glVertexAttribIPointer(2, 4, GL_UNSIGNED_BYTE, vertexSize,
      reinterpret_cast<void*>(sizeof(float) * 6));
   // Skinning weights (convert to floats)
   glEnableVertexAttribArray(3);
   glVertexAttribPointer(3, 4, GL_UNSIGNED_BYTE, GL_TRUE, vertexSize,
      reinterpret_cast<void*>(sizeof(float) * 6 + 4));
   // Texture coordinates
   glEnableVertexAttribArray(4);
   glVertexAttribPointer(4, 2, GL_FLOAT, GL_FALSE, vertexSize,
      reinterpret_cast<void*>(sizeof(float) * 6 + 8));
}
You then declare a SkeletalMeshComponent subclass of MeshComponent:

class SkeletalMeshComponent : public MeshComponent
{
public:
   SkeletalMeshComponent(class Actor* owner);
   // Draw this mesh component
   void Draw(class Shader* shader) override;
};
Next, you load the skinning vertex shader and the Phong
fragment shaders in Renderer::LoadShader and save
the resulting shader program in a mSkinnedShader
member variable.
Finally, in Renderer::Draw, after drawing the regular
meshes, you draw all the skeletal meshes. The code is
almost identical to the regular mesh drawing code from
Chapter 6, except you use the skeletal mesh shader:
// Draw any skinned meshes now
mSkinnedShader->SetActive();
// Update view-projection matrix and lighting uniforms
mSkinnedShader->SetMatrixUniform("uViewProj", mView * mProjection);
SetLightUniforms(mSkinnedShader);
for (auto sk : mSkeletalMeshes)
{
   if (sk->GetVisible())
   {
      sk->Draw(mSkinnedShader);
   }
}
With all this code in place, you can now draw a model
with skinning vertex attributes, as in Figure 12.4. The
character model used in this chapter is the Feline
Swordsman model created by Pior Oberson. The model
file is CatWarrior.gpmesh in the Assets directory for
this chapter’s game project.
Loading a Skeleton
Now that the skinned model is drawing, the next step
is to load the skeleton. The gpskel file format simply
defines the bones, their parents, and the local pose
transform for every bone in bind pose. To
encapsulate the skeleton data, you can declare a
Skeleton class, as shown in Listing 12.4.
class Skeleton
{
public:
   struct Bone
   {
      BoneTransform mLocalBindPose;
      std::string mName;
      int mParent;
   };
   // Load from a file
   bool Load(const std::string& fileName);
   // Getter functions
   size_t GetNumBones() const { return mBones.size(); }
   const Bone& GetBone(size_t idx) const { return mBones[idx]; }
   const std::vector<Bone>& GetBones() const { return mBones; }
   const std::vector<Matrix4>& GetGlobalInvBindPoses() const
   { return mGlobalInvBindPoses; }
protected:
   // Computes the global inverse bind pose for each bone
   // (called when the skeleton is loaded)
   void ComputeGlobalInvBindPose();
private:
   // The bones in the skeleton
   std::vector<Bone> mBones;
   // The global inverse bind poses for each bone
   std::vector<Matrix4> mGlobalInvBindPoses;
};
void Skeleton::ComputeGlobalInvBindPose()
{
   // Resize to number of bones, which automatically fills identity
   mGlobalInvBindPoses.resize(GetNumBones());
   // Step 1: Compute global bind pose for each bone
   // The global bind pose for root is just the local bind pose
   mGlobalInvBindPoses[0] = mBones[0].mLocalBindPose.ToMatrix();
   // Each remaining bone's global bind pose is its local pose
   // multiplied by the parent's global bind pose
   for (size_t i = 1; i < mGlobalInvBindPoses.size(); i++)
   {
      Matrix4 localMat = mBones[i].mLocalBindPose.ToMatrix();
      mGlobalInvBindPoses[i] = localMat *
         mGlobalInvBindPoses[mBones[i].mParent];
   }
   // Step 2: Invert each matrix
   for (size_t i = 0; i < mGlobalInvBindPoses.size(); i++)
   {
      mGlobalInvBindPoses[i].Invert();
   }
}
{
   "version":1,
   "sequence":{
      "frames":19,
      "duration":0.600000,
      "bonecount":68,
      "tracks":[
         {
            "bone":0,
            "transforms":[
               {
                  "rot":[-0.500199,0.499801,-0.499801,0.500199],
                  "trans":[0.000000,0.000000,0.000000]
               },
               {
                  "rot":[-0.500199,0.499801,-0.499801,0.500199],
                  "trans":[0.000000,0.000000,0.000000]
               },
               // ...
            ]
         },
         // Additional tracks for each bone
         // ...
      ]
   }
}
class Animation
{
public:
   bool Load(const std::string& fileName);
   size_t GetNumBones() const { return mNumBones; }
   size_t GetNumFrames() const { return mNumFrames; }
   float GetDuration() const { return mDuration; }
   float GetFrameDuration() const { return mFrameDuration; }
   // Fills the provided vector with the global (current) pose matrices
   // for each bone at the specified time in the animation
   void GetGlobalPoseAtTime(std::vector<Matrix4>& outPoses,
      const class Skeleton* inSkeleton, float inTime) const;
private:
   // Number of bones for the animation
   size_t mNumBones;
   // Number of frames in the animation
   size_t mNumFrames;
   // Duration of the animation in seconds
   float mDuration;
   // Duration of each frame in the animation
   float mFrameDuration;
   // Transform information for each frame on the track
   // (outer index is the bone, inner index is the frame)
   std::vector<std::vector<BoneTransform>> mTracks;
};
To compute the global pose for each bone, you follow the
same approach discussed before. You first set the root
bone’s global pose, and then each other bone’s global
pose is its local pose multiplied by its parent’s global
pose. The first index of mTracks corresponds to the
bone index, and the second index corresponds to the
frame in the animation. So, this first version of
GetGlobalPoseAtTime hard-codes the second index
to 0 (the first frame of the animation), as shown in
Listing 12.8.
void Animation::GetGlobalPoseAtTime(std::vector<Matrix4>& outPoses,
   const Skeleton* inSkeleton, float inTime) const
{
   if (outPoses.size() != mNumBones)
   {
      outPoses.resize(mNumBones);
   }
   // For now, just compute the pose for every bone at frame 0
   size_t frame = 0;
   // Set the pose for the root
   if (mTracks[0].size() > 0)
   {
      // The global pose for the root is just its local pose
      outPoses[0] = mTracks[0][frame].ToMatrix();
   }
   else
   {
      outPoses[0] = Matrix4::Identity;
   }
   // Now compute the global pose matrices for every other bone
   for (size_t bone = 1; bone < mNumBones; bone++)
   {
      Matrix4 localMat; // (Defaults to identity)
      if (mTracks[bone].size() > 0)
      {
         localMat = mTracks[bone][frame].ToMatrix();
      }
      outPoses[bone] = localMat *
         outPoses[inSkeleton->GetBone(bone).mParent];
   }
}
struct MatrixPalette
{
   Matrix4 mEntry[MAX_SKELETON_BONES];
};
// Matrix palette
MatrixPalette mPalette;
// Animation currently playing
const class Animation* mAnimation;
// Play rate of the animation (1.0 is normal speed)
float mAnimPlayRate;
// Current time in the animation
float mAnimTime;
void SkeletalMeshComponent::ComputeMatrixPalette()
{
   const std::vector<Matrix4>& globalInvBindPoses =
      mSkeleton->GetGlobalInvBindPoses();
   std::vector<Matrix4> currentPoses;
   mAnimation->GetGlobalPoseAtTime(currentPoses, mSkeleton,
      mAnimTime);
   // Setup the palette for each bone
   for (size_t i = 0; i < mSkeleton->GetNumBones(); i++)
   {
      // Global inverse bind pose matrix times current pose matrix
      mPalette.mEntry[i] = globalInvBindPoses[i] * currentPoses[i];
   }
}

float SkeletalMeshComponent::PlayAnimation(const Animation* anim,
                                           float playRate)
{
   mAnimation = anim;
   mAnimTime = 0.0f;
   mAnimPlayRate = playRate;
   if (!mAnimation) { return 0.0f; }
   // Set the palette to the first frame of the animation
   ComputeMatrixPalette();
   return mAnimation->GetDuration();
}
Now you can load the animation data, compute the pose
matrices for frame 0 of the animation, and calculate the
matrix palette. However, the current pose of the
animation still won’t show up onscreen because the
vertex shader needs modification.
Once the vertex shader has a matrix palette, you can then
apply the skinning calculations from earlier in the
chapter. Remember that because each vertex has up to
four different bone influences, you must calculate four
different positions and blend between them based on the
weight of each bone. You do this before transforming the
point into world space because the skinned vertex is still
in object space (just not in the bind pose).
Listing 12.10 shows the main function for the skinning
vertex shader program. Recall that inSkinBones and
inSkinWeights are the four bone indices and the four
bone weights. The accessors for x, y, and so on are simply
accessing the first bone, the second bone, and so on.
Once you calculate the interpolated skinned position of
the vertex, you transform the point to world space and
then projection space.
void main()
{
   // Convert position to homogeneous coordinates
   vec4 pos = vec4(inPosition, 1.0);
   // Skin the position: blend the four bone transforms by weight
   vec4 skinnedPos = (pos * uMatrixPalette[inSkinBones.x]) * inSkinWeights.x
                   + (pos * uMatrixPalette[inSkinBones.y]) * inSkinWeights.y
                   + (pos * uMatrixPalette[inSkinBones.z]) * inSkinWeights.z
                   + (pos * uMatrixPalette[inSkinBones.w]) * inSkinWeights.w;
   // Transform position to world space, then clip space
   skinnedPos = skinnedPos * uWorldTransform;
   fragWorldPos = skinnedPos.xyz;
   gl_Position = skinnedPos * uViewProj;
   // Skin the vertex normal in the same way (w = 0)
   vec4 normal = vec4(inNormal, 0.0);
   vec4 skinnedNormal = (normal * uMatrixPalette[inSkinBones.x]) * inSkinWeights.x
                      + (normal * uMatrixPalette[inSkinBones.y]) * inSkinWeights.y
                      + (normal * uMatrixPalette[inSkinBones.z]) * inSkinWeights.z
                      + (normal * uMatrixPalette[inSkinBones.w]) * inSkinWeights.w;
   fragNormal = (skinnedNormal * uWorldTransform).xyz;
   // Pass along the texture coordinate to frag shader
   fragTexCoord = inTexCoord;
}
On the C++ side, SkeletalMeshComponent::Draw uploads the palette to the shader before drawing:

shader->SetMatrixUniforms("uMatrixPalette", &mPalette.mEntry[0],
   MAX_SKELETON_BONES);
Updating Animations
The final step to get a working skeletal animation
system is to update the animation every frame, based
on delta time. You need to change the Animation
class so that it correctly gets the pose based on the
time in the animation, and you need to add an
Update function to SkeletalMeshComponent.
void Animation::GetGlobalPoseAtTime(std::vector<Matrix4>& outPoses,
   const Skeleton* inSkeleton, float inTime) const
{
   if (outPoses.size() != mNumBones)
   {
      outPoses.resize(mNumBones);
   }
   // Figure out the current frame index and next frame
   // (This assumes inTime is bounded by [0, AnimDuration])
   size_t frame = static_cast<size_t>(inTime / mFrameDuration);
   size_t nextFrame = frame + 1;
   // Calculate fractional value between frame and next frame
   float pct = inTime / mFrameDuration - frame;
   // Setup the pose for the root
   if (mTracks[0].size() > 0)
   {
      // Interpolate between the current frame's pose and the next frame
      BoneTransform interp = BoneTransform::Interpolate(mTracks[0][frame],
         mTracks[0][nextFrame], pct);
      outPoses[0] = interp.ToMatrix();
   }
   else
   {
      outPoses[0] = Matrix4::Identity;
   }
   // Now compute the global pose matrices for every other bone
   for (size_t bone = 1; bone < mNumBones; bone++)
   {
      Matrix4 localMat; // (Defaults to identity)
      if (mTracks[bone].size() > 0)
      {
         BoneTransform interp =
            BoneTransform::Interpolate(mTracks[bone][frame],
               mTracks[bone][nextFrame], pct);
         localMat = interp.ToMatrix();
      }
      outPoses[bone] = localMat *
         outPoses[inSkeleton->GetBone(bone).mParent];
   }
}

SkeletalMeshComponent::Update then advances the animation time, wrapping it when it passes the duration, and recomputes the palette:

void SkeletalMeshComponent::Update(float deltaTime)
{
   if (mAnimation && mSkeleton)
   {
      mAnimTime += deltaTime * mAnimPlayRate;
      // Wrap around anim time if past duration
      while (mAnimTime > mAnimation->GetDuration())
      { mAnimTime -= mAnimation->GetDuration(); }
      // Recompute matrix palette
      ComputeMatrixPalette();
   }
}
GAME PROJECT
This chapter’s game project implements skeletal
animation as described in this chapter. It includes
the SkeletalMeshComponent, Animation, and
Skeleton classes, as well as the skinned vertex
shader. The code is available in the book’s GitHub
repository, in the Chapter12 directory. Open
Chapter12-windows.sln in Windows and
Chapter12-mac.xcodeproj on Mac.
SUMMARY
This chapter provides a comprehensive overview of
skeletal animation. In skeletal animation, a character
has a rigid skeleton that animates, and vertices act
like a skin that deforms with this skeleton. The
skeleton contains a hierarchy of bones, and every
bone except for the root has a parent bone.
ADDITIONAL READING
Jason Gregory takes an in-depth look at more
advanced topics in animation systems, such as
blending animations, compressing animation data,
and inverse kinematics.
EXERCISES
Exercise 12.1
It’s useful for a game to get the position of a bone as
an animation plays. For example, if a character holds
an object in his hand, you need to know the position
of the bone as the animation changes. Otherwise, the
character will no longer hold the item properly!
CHAPTER 13
INTERMEDIATE GRAPHICS
Mipmapping
In mipmapping, rather than having a single source
texture, you generate a series of additional textures,
called mipmaps, that are at lower resolutions than
the source texture. For example, if the source texture
has a resolution of 256×256, you may generate
mipmaps of 128×128, 64×64, and 32×32. Then,
when it’s time to draw the texture onscreen, the
graphics hardware can select the mipmap texture
that yields a texel density closest to 1:1. While
mipmapping doesn’t improve texture quality when
you’re magnifying a texture to a resolution higher
than the original resolution, it greatly improves the
quality when you’re reducing the size of a texture.
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
GL_LINEAR);
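The size of a full mip chain follows directly from the halving: a texture has floor(log2(max(width, height))) + 1 levels, with each level clamped at 1 texel per dimension. A quick sketch (NumMipLevels is a hypothetical helper, not an OpenGL call):

```cpp
#include <cassert>
#include <algorithm>

// Number of levels in a full mip chain for a texture of the given size:
// each level halves both dimensions (clamped at 1) until reaching 1x1
int NumMipLevels(int width, int height) {
    int levels = 1;
    int dim = std::max(width, height);
    while (dim > 1) {
        dim /= 2;
        levels++;
    }
    return levels;
}
```

So the 256×256 example above generates a 9-level chain (256 down to 1), which is what glGenerateMipmap produces; the whole chain costs only about one third more memory than the base texture.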
Anisotropic Filtering
Although mipmapping greatly reduces sampling
artifacts in most instances, textures viewed at
oblique angles relative to the camera will appear very
blurry. This is especially noticeable with floor
textures, as shown in Figure 13.6(b). Anisotropic
filtering mitigates this by sampling additional
points on the texture when it is viewed at an oblique
angle. For example, 16x anisotropic filtering means
that there are 16 different samples for the texel color.
if (GLEW_EXT_texture_filter_anisotropic)
{
   // Get the maximum anisotropy value
   GLfloat largest;
   glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &largest);
   // Enable it
   glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
      largest);
}
note
For high-quality reflections, such as for a large mirror, you must render the
scene from the perspective of the surface. However, if the game scene
contains many surfaces that need low-quality reflections, rendering the
scene from the perspective of each of these surfaces is too expensive. In this
case, you can instead generate a single reflection map of the entire scene.
Then, for every low-quality reflective surface, you sample from this reflection
map to give the illusion of a reflection. Although the quality is significantly
lower than when rendering from the perspective of the reflective surface, it is
sufficient for surfaces that only need low-quality reflections.
This book does not cover how to implement reflection maps, but you can
consult the “Additional Reading” section at the end of this chapter for further
information on the topic.
{
mWidth = width;
mHeight = height;
glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);
GL_FLOAT, nullptr);
glGenFramebuffers(1, &mMirrorBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, mMirrorBuffer);
GLuint depthBuffer;
glGenRenderbuffers(1, &depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_RENDERBUFFER, depthBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
mMirrorTexture->GetTextureID(), 0);
glDrawBuffers(1, drawBuffers);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
glDeleteFramebuffers(1, &mMirrorBuffer);
mMirrorTexture->Unload();
delete mMirrorTexture;
mMirrorTexture = nullptr;
return false;
}
return true;
float viewportScale)
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glViewport(0, 0,
static_cast<int>(mScreenWidth * viewportScale),
static_cast<int>(mScreenHeight * viewportScale)
);
// Clear color buffer/depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ...
SDL_GL_SwapWindow(mWindow);
static_cast<float>(texture->GetWidth()) * scale,
yScale,
1.0f);
texture->SetActive();
// Draw quad
DEFERRED SHADING
Recall that the Phong lighting implemented in
Chapter 6 performs the lighting calculations for each
fragment when drawing a mesh. The pseudocode for
this type of lighting calculation is as follows:
note
Storing the world position in the G-buffer makes your later calculations
simpler—but at the expense of increased memory and rendering bandwidth
usage.
It’s possible to reconstruct the world position at a pixel from the depth buffer
and the view-projection matrix, which eliminates the need for the world
position in the G-buffer. Consult Phil Djonov’s article in the “Additional
Reading” section at the end of the chapter to learn how to do these
calculations.
class GBuffer
public:
enum Type
EDiffuse = 0,
ENormal,
EWorldPos,
NUM_GBUFFER_TEXTURES
};
GBuffer();
~GBuffer();
void Destroy();
void SetTexturesActive();
private:
// Framebuffer object ID
};
// ...
// ...
mTextures.emplace_back(tex);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
tex->GetTextureID(), 0);
attachments.emplace_back(GL_COLOR_ATTACHMENT0 + i);
glDrawBuffers(static_cast<GLsizei>(attachments.size()),
attachments.data());
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) !=
GL_FRAMEBUFFER_COMPLETE)
Destroy();
return false;
return true;
note
Although GL_RGB32F yields a lot of precision for the values in the G-buffer,
the trade-off is that the G-buffer takes up a significant amount of graphics
memory. Three GL_RGB32F textures at a resolution of 1024×768 (your
screen resolution) takes up 27 MB of memory on the GPU. To reduce
memory usage, many games instead use GL_RGB16F (three half-precision
floats), which would cut the memory usage in half.
You could further optimize the memory usage with other tricks. For example,
because a normal is unit length, given the x and y components and the sign
of the z component, you can solve for the z component. This means you
could store the normals in GL_RG16F format (two half-precision floats) and
later derive the z component. In the interest of simplicity, this chapter does
not implement these optimizations, but you should know that many
commercial games use such tricks.
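To make the memory numbers above concrete, here is a small sketch (not from the book's code) that computes the G-buffer footprint for a given texture format:

```cpp
#include <cstddef>

// Bytes used by numTextures G-buffer textures at the given resolution,
// with the given number of channels and bytes per channel
size_t GBufferBytes(size_t width, size_t height, size_t numTextures,
                    size_t channels, size_t bytesPerChannel)
{
    return width * height * channels * bytesPerChannel * numTextures;
}
```

With three GL_RGB32F textures (3 channels, 4 bytes each) at 1024×768 this gives 27 MB, and switching to GL_RGB16F (2 bytes per channel) halves it, matching the figures in the note.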
if (!mGBuffer->Create(width, height))
return false;
#version 330
void main()
outNormal = fragNormal;
outWorldPos = fragWorldPos;
GRAPHICS DEBUGGERS
For Windows and Linux, the best graphics debugger that supports
OpenGL is RenderDoc (https://renderdoc.org), an open source tool
created by Baldur Karlsson. In addition to OpenGL, it supports
debugging for Vulkan as well as Microsoft Direct3D 11 and 12 (the latter
two only on Windows). Unfortunately, at this writing, RenderDoc has no
macOS support.
For macOS users, Intel Graphics Performance Analyzers (GPA) is a
great alternative. See https://software.intel.com/en-us/gpa.
Global Lighting
Now that the game is writing surface properties to
the G-buffer, the next step is to use these properties
to display a fully lit scene. This section focuses on
global lights such as the ambient and a global
directional light. The basic premise is to draw a quad
the size of the screen to the default framebuffer. For
each fragment in this quad, you sample surface
properties from the G-buffer. Then, using these
surface properties, you can compute the same Phong
lighting equations from Chapter 6 to light the
fragment.
#version 330
// ...
void main()
// ...
if (!mGGlobalShader->Load("Shaders/GBufferGlobal.vert",
"Shaders/GBufferGlobal.frag"))
return false;
mGGlobalShader->SetIntUniform("uGDiffuse", 0);
mGGlobalShader->SetIntUniform("uGNormal", 1);
mGGlobalShader->SetIntUniform("uGWorldPos", 2);
mGGlobalShader->SetMatrixUniform("uViewProj", spriteViewProj);
-mScreenHeight, 1.0f);
mGGlobalShader->SetMatrixUniform("uWorldTransform", gbufferWorld);
void GBuffer::SetTexturesActive()
mTextures[i]->SetActive(i);
}
void Renderer::DrawFromGBuffer()
{
glDisable(GL_DEPTH_TEST);
mGGlobalShader->SetActive();
mSpriteVerts->SetActive();
mGBuffer->SetTexturesActive();
SetLightUniforms(mGGlobalShader, mView);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
DrawFromGBuffer();
// ...
class PointLightComponent
public:
~PointLightComponent();
// Diffuse color
Vector3 mDiffuseColor;
// Radius of light
float mInnerRadius;
float mOuterRadius;
};
struct PointLight
// Position of light
vec3 mWorldPos;
// Diffuse color
vec3 mDiffuseColor;
float mInnerRadius;
float mOuterRadius;
};
void main()
vec3 N = normalize(gbufferNorm);
if (NdotL > 0)
{
// Get the distance between the light and the world pos
uPointLight.mOuterRadius, dist);
glBindFramebuffer(GL_READ_FRAMEBUFFER, mGBuffer->GetBufferID());
0, 0, width, height,
GL_DEPTH_BUFFER_BIT, GL_NEAREST);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
// Set the point light shader and mesh as active
mGPointLightShader->SetActive();
mPointLightMesh->GetVertexArray()->SetActive();
mGPointLightShader->SetMatrixUniform("uViewProj",
mView * mProjection);
mGBuffer->SetTexturesActive();
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
p->Draw(mGPointLightShader, mPointLightMesh);
Then you activate the shader for the point lights as well
as the corresponding point light mesh. You need to set
the view-projection matrix just as for any other object
rendered in the world to make sure the point light has
the correct location onscreen. You also need to bind the
G-buffer textures to their respective slots.
Finally, you loop over all the point lights and call the
Draw function on each point light. The code for
PointLightComponent::Draw, shown in Listing
13.15, doesn’t look that much different from the code for
drawing any other mesh. For the world transform
matrix, you need to scale based on the outer radius of the
light. You divide by the radius of the mesh because the
point light mesh does not have a unit radius. The
translation is just based on the position of the light,
which comes from the owning actor.
Once you draw all the point light meshes, for every
fragment you calculate the contribution of the point light
to the color of the fragment. You then add this additional
light color to the already existing color from the global
lighting pass.
mOuterRadius / mesh->GetRadius());
shader->SetMatrixUniform("uWorldTransform", worldTransform);
shader->SetVectorUniform("uPointLight.mWorldPos", mOwner->GetPosition());
shader->SetVectorUniform("uPointLight.mDiffuseColor", mDiffuseColor);
shader->SetFloatUniform("uPointLight.mInnerRadius", mInnerRadius);
shader->SetFloatUniform("uPointLight.mOuterRadius", mOuterRadius);
glDrawElements(GL_TRIANGLES, mesh->GetVertexArray()->GetNumIndices(),
GL_UNSIGNED_INT, nullptr);
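The per-fragment contribution described above scales the diffuse color by a falloff between the light's inner and outer radius. Here is one reasonable way to express that falloff in C++, mirroring GLSL's smoothstep; the exact curve used in the shader may differ:

```cpp
#include <algorithm>

// Smoothstep-style falloff: 1.0 at or inside the inner radius,
// 0.0 at or beyond the outer radius, smooth in between
float PointLightFalloff(float innerRadius, float outerRadius, float dist)
{
    float t = (dist - innerRadius) / (outerRadius - innerRadius);
    t = std::clamp(t, 0.0f, 1.0f);
    float s = t * t * (3.0f - 2.0f * t); // smoothstep curve
    return 1.0f - s;                     // invert: closer = brighter
}
```

Multiplying the light's diffuse color by this factor makes the light fade smoothly to zero at the edge of its sphere of influence, which is why drawing the sphere mesh scaled to the outer radius covers exactly the affected fragments.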
GAME PROJECT
This chapter’s game project provides the full
implementation of deferred shading. In addition, it
uses both mipmapping and anisotropic filtering to
improve texture quality. The project includes the
mirror texture that’s forward rendered. The code is
available in the book’s GitHub repository, in the
Chapter13 directory. Open Chapter13-
windows.sln in Windows and Chapter13-
mac.xcodeproj on Mac.
EXERCISES
In this chapter’s exercises, you explore improving the
deferred shading techniques covered in the latter half
of the chapter.
Exercise 13.1
Add support for the specular component to both the
global G-buffer lighting (the directional light) and
the point lights. To do this, first you need a new
texture in the G-buffer that stores the specular power
of the surface. Add this new texture to the relevant
parts of code (both in C++ and in GLSL).
Exercise 13.2
Adding a new type of light to deferred shading
requires a new type of light geometry. Add support
for spotlights. To do so, you need to create a
SpotLightComponent as well as a corresponding
shader to draw these lights after the point lights.
For all the reasons just listed, you need a file format
that’s more structured. As with the rest of the book, in
this chapter you once again use a text-based JSON
format for data. However, this chapter also explores the
trade-offs that any text format makes, as well as
techniques needed for binary file formats.
{
"version": 1,
"globalProperties": {
"directionalLight": {
class LevelLoader
public:
};
rapidjson::Document& outDoc)
{
    std::ifstream file(fileName, std::ios::in |
        std::ios::binary | std::ios::ate);
    if (!file.is_open())
    {
        return false;
    }
    // Get the size of the file and seek back to the start
    std::ifstream::pos_type fileSize = file.tellg();
    file.seekg(0, std::ios::beg);
    // Create a vector of size + 1 (for null terminator)
    std::vector<char> bytes(static_cast<size_t>(fileSize) + 1);
    file.read(bytes.data(), static_cast<size_t>(fileSize));
    // Parse the raw bytes into the RapidJSON document
    outDoc.Parse(bytes.data());
    if (!outDoc.IsObject())
    {
        return false;
    }
    return true;
}
rapidjson::Document doc;
if (!LoadJSON(fileName, doc))
{
return false;
if (itr == inObject.MemberEnd())
{
return false;
if (!property.IsInt())
return false;
outInt = property.GetInt();
return true;
int version = 0;
if (!JsonHelper::GetInt(doc, "version", version) ||
version != LevelVersion)
return false;
Listing 14.4
LevelLoader::LoadGlobalProperties
Implementation
Vector3 ambient;
game->GetRenderer()->SetAmbientLight(ambient);
if (dirObj.IsObject())
}
You then add a call to LoadGlobalProperties in
LoadLevel, immediately after the validation code for
the level file version:
if (globals.IsObject())
LoadGlobalProperties(game, globals);
LevelLoader::LoadLevel(this, "Assets/Level0.gplevel");
Loading Actors
Loading in the actors means the JSON file needs an
array of actors, and each actor has property
information for that actor. However, you need some
way to specify which type of Actor you need
(because there are subclasses). In addition, you want
to avoid having a long set of conditional checks in the
level loading code to determine which Actor
subclass to allocate.
// ...
"actors": [
"type": "TargetActor",
"properties": {
},
"type": "TargetActor",
"properties": {
},
"type": "TargetActor",
"properties": {
}
Assuming for a moment that you have a method to
construct an actor of a specific type, you also need to be
able to load properties for the actor. The simplest
approach is to create a virtual LoadProperties
function in the base Actor class, shown in Listing 14.6.
std::string state;
if (JsonHelper::GetString(inObj, "state", state))
{
    if (state == "active")
        SetState(EActive);
    else if (state == "paused")
        SetState(EPaused);
    else if (state == "dead")
        SetState(EDead);
}
ComputeWorldTransform();
Actor::LoadProperties(inObj);
// ...
Now that you have a way to load properties, the next step
is to solve the issue of constructing an actor of the correct
type. One approach is to create a map where the key is
the string name of the actor type, and the value is a
function that can dynamically allocate an actor of that
type. The key is straightforward because it’s just a string.
For the value, you can make a static function that
dynamically allocates an actor of a specific type. To avoid
having to declare a separate function in each subclass of
Actor, you can instead create a template function like
this in the base Actor class:
T* t = new T(game);
t->LoadProperties(inObj);
return t;
>;
{ "Actor", &Actor::Create<Actor> },
{ "BallActor", &Actor::Create<BallActor> },
{ "FollowActor", &Actor::Create<FollowActor> },
{ "PlaneActor", &Actor::Create<PlaneActor> },
{ "TargetActor", &Actor::Create<TargetActor> },
};
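The pattern just described (a string key mapped to a templated static Create function) can be shown end to end in a standalone sketch. The names here are illustrative stand-ins, not the book's exact classes:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

struct Actor
{
    virtual ~Actor() = default;
    virtual std::string TypeName() const { return "Actor"; }
    // Templated factory: allocates a T and returns it as Actor*
    template <typename T>
    static Actor* Create() { return new T(); }
};

struct TargetActor : Actor
{
    std::string TypeName() const override { return "TargetActor"; }
};

// Map from type name string to a factory function
using ActorFunc = std::function<Actor*()>;
static const std::unordered_map<std::string, ActorFunc> sFactory = {
    { "Actor", &Actor::Create<Actor> },
    { "TargetActor", &Actor::Create<TargetActor> },
};

// Look up the type name and invoke the matching factory
Actor* CreateByName(const std::string& type)
{
    auto iter = sFactory.find(type);
    return iter != sFactory.end() ? iter->second() : nullptr;
}
```

The key benefit is that the loading code never names concrete subclasses directly; adding a new actor type only requires one new map entry.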
With the map set up, you can now create a LoadActors
function, as in Listing 14.7. Here, you loop over the
actors array in the JSON file and get the type string for
the actor. You use this type to then look up in
sActorFactoryMap. If you find the type, you call the
function stored as the value in the map (iter-
>second), which in turn calls the correct version of
Actor::Create. If you don’t find the type, you have a
helpful debug log message output.
if (actorObj.IsObject())
std::string type;
if (JsonHelper::GetString(actorObj, "type", type))
if (iter != sActorFactoryMap.end())
else
if (actors.IsArray())
LoadActors(game, actors);
Loading Components
Loading data for components involves many of the
same patterns as for actors. However, there is one
key difference. Listing 14.8 shows a snippet of the
declaration of two different actors with their
components property set. The base Actor type
does not have any existing components attached to
it. So in this case, the MeshComponent type means
that you must construct a new MeshComponent for
the actor. However, the TargetActor type already
has a MeshComponent, as one is created in the
constructor for TargetActor. In this case, the
properties specified should update the existing
component rather than create a new one. This means
the code for loading components needs to handle
both cases.
"actors": [
"type": "Actor",
"properties": {
"scale": 5.0
},
"components": [
"type": "MeshComponent",
},
"type": "TargetActor",
"properties": { "position": [1450.0, 0.0, 100.0] },
"components": [
"type": "MeshComponent",
TComponent = 0,
TAudioComponent,
TBallMove,
// ...
NUM_COMPONENT_TYPES
};
if (c->GetType() == type)
comp = c;
break;
return comp;
Note that this system also assumes that you won’t have
multiple components of the same type attached to one
actor. If you wanted to have multiple components of the
same type, then GetComponentOfType would
potentially have to return a collection of components
rather than just a single pointer.
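A hypothetical variant that returns every matching component might look like the following standalone sketch (the minimal Component type here is illustrative, not the book's class):

```cpp
#include <vector>

enum TypeID { TComponent = 0, TMeshComponent };

struct Component
{
    TypeID mType;
    TypeID GetType() const { return mType; }
};

// Hypothetical GetComponentsOfType: collects every component
// whose type matches, instead of stopping at the first one
std::vector<Component*> GetComponentsOfType(
    const std::vector<Component*>& components, TypeID type)
{
    std::vector<Component*> matches;
    for (Component* c : components)
    {
        if (c->GetType() == type)
        {
            matches.push_back(c);
        }
    }
    return matches;
}
```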
Component::LoadProperties(inObj);
std::string meshFile;
SetMesh(mOwner->GetGame()->GetRenderer()->GetMesh(meshFile));
int idx;
mTextureIndex = static_cast<size_t>(idx);
}
The next step is to add a static templated Create
function for Component, which is very similar to the one
in Actor except that the parameters are different. (It
takes in Actor* as the first parameter instead of
Game*.)
>;
static std::unordered_map<std::string,
LevelLoader::sComponentFactoryMap
{ "AudioComponent",
{ Component::TAudioComponent, &Component::Create<AudioComponent>}
},
{ "BallMove",
{ Component::TBallMove, &Component::Create<BallMove> }
},
// ...
};
if (compObj.IsObject())
std::string type;
{
auto iter = sComponentFactoryMap.find(type);
if (iter != sComponentFactoryMap.end())
(iter->second.first);
if (comp == nullptr)
else
comp->LoadProperties(compObj["properties"]);
else
{
if (actorObj.HasMember("components"))
if (components.IsArray())
LoadComponents(actor, components);
}
With all this code in place, you can now load the entire
level from a file, including the global properties, actors,
and any components associated with each actor.
rapidjson::Value v(value);
inObject.AddMember(rapidjson::StringRef(name), v, alloc);
rapidjson::Document doc;
doc.SetObject();
// ...
rapidjson::PrettyWriter<rapidjson::StringBuffer> writer(buffer);
doc.Accept(writer);
std::ofstream outFile(fileName);
if (outFile.is_open())
For now, this function only writes out the version to the
output file. But with this skeleton code, you can start
adding the remaining output.
rapidjson::Value globals(rapidjson::kObjectType);
"Component",
"AudioComponent",
"BallMove",
// Rest omitted
// ...
};
if (mState == EPaused)
state = "paused";
state = "dead";
rapidjson::Value obj(rapidjson::kObjectType);
// Add type
rapidjson::Value props(rapidjson::kObjectType);
// Save properties
actor->SaveProperties(alloc, props);
// Save components
rapidjson::Value components(rapidjson::kArrayType);
inArray.PushBack(obj, alloc);
note
With some work, you could create a single serialize function that both loads
and saves properties. This way, you could avoid having to update two
different functions every time you add a new property to an actor or a
component.
For the best of both worlds, you may want to use text
files during development (at least for some members of
the team) and then binary files in optimized builds. This
section explores how to create a binary mesh file format.
To keep things simple, in the code that loads in the
gpmesh JSON format, you will first check if a
corresponding gpmesh.bin file exists. If it does, you’ll
load that in instead of the JSON file. If it doesn’t exist,
the game will create the binary version file so that next
time you run the game, you can load the binary version
instead of the text version.
struct MeshBinHeader
// Version
uint32_t mNumTextures = 0;
uint32_t mNumVerts = 0;
uint32_t mNumIndices = 0;
};
With the file format decided on, you can then create the
SaveBinary function, as shown in Listing 14.15. This
function takes in a lot of parameters because there’s a lot
of information needed to create the binary file. In total,
you need the filename, a pointer to the vertex buffer, the
number of vertices, the layout of these vertices, a pointer
to the index buffer, the number of indices, a vector of the
texture names, the bounding box of the mesh, and the
radius of the mesh. With all these parameters, you can
save the file.
{
// Create header struct
MeshBinHeader header;
header.mLayout = layout;
header.mNumTextures =
static_cast<unsigned>(textureNames.size());
header.mNumVerts = numVerts;
header.mNumIndices = numIndices;
header.mBox = box;
header.mRadius = radius;
| std::ios::binary);
if (outFile.is_open())
outFile.write(reinterpret_cast<char*>(&header), sizeof(header));
outFile.write(reinterpret_cast<char*>(&nameSize),
sizeof(nameSize));
outFile.write("\0", 1);
// Write vertices
outFile.write(reinterpret_cast<const char*>(verts),
numVerts * vertexSize);
// Write indices
outFile.write(reinterpret_cast<const char*>(indices),
numIndices * sizeof(uint32_t));
}
The code in Listing 14.15 does quite a lot. First, you
create an instance of the MeshBinHeader struct and fill
in all its members. Next, you create a file for output and
open it in binary mode. If this file successfully opens, you
can write to it.
Then you write the header of the file with the write
function call. The first parameter write expects is a
char pointer, so in many cases it’s necessary to cast a
different pointer to a char*. This requires a
reinterpret_cast because a MeshBinHeader*
cannot directly convert to a char*. The second
parameter to write is the number of bytes to write to the
file. Here, you use sizeof to specify the number of bytes
corresponding to the size of MeshBinHeader. In other
words, you are writing sizeof(header) bytes starting
at the address of header. This is a quick way to just
write the entire struct in one fell swoop.
warning
WATCH OUT FOR ENDIANNESS: The order in which a CPU platform saves
values larger than 1 byte is called endianness. The method used here to
read and write MeshBinHeader will not work if the endianness of the
platform that writes out the gpmesh.bin file is different from the endianness
of the platform that reads the gpmesh.bin file.
Although most platforms today are little endian, endianness can still be a
potential issue with code of this style.
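If you did need to support both byte orders, one common approach (not implemented in the book's code) is to pick a canonical file endianness and swap multi-byte values on mismatched platforms:

```cpp
#include <cstdint>

// Reverse the byte order of a 32-bit value
uint32_t SwapBytes32(uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) << 8)  |
           ((v & 0x00FF0000u) >> 8)  |
           ((v & 0xFF000000u) >> 24);
}

// Detect whether this platform stores the least significant
// byte first (little endian)
bool IsLittleEndian()
{
    uint32_t one = 1;
    return *reinterpret_cast<unsigned char*>(&one) == 1;
}
```

At load time, you would compare the file's declared endianness against `IsLittleEndian()` and apply `SwapBytes32` to each multi-byte field only on a mismatch.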
Next, you loop through all the texture names and write
each of them to the file. For each filename, you first write
the number of characters in the filename (plus one for
the null terminator) and then write the string itself. Note
that this code assumes that a filename can’t be larger
than 64 KB, which should be a safe assumption. The
reason you write the number of characters and the name
is for loading. The header only stores the number of
textures and not the size of each string. Without storing
the number of characters, at load time you would have
no way of knowing how many bytes to read for the
filename.
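The length-prefixed string scheme described above can be sketched independently of the mesh code; here a std::stringstream stands in for the file (a simplification of the book's approach, with a 16-bit size matching the 64 KB assumption):

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Write a string as a 16-bit length (including null terminator)
// followed by the characters and a trailing '\0'
void WriteString(std::ostream& out, const std::string& name)
{
    uint16_t nameSize = static_cast<uint16_t>(name.length()) + 1;
    out.write(reinterpret_cast<char*>(&nameSize), sizeof(nameSize));
    out.write(name.data(), name.length());
    out.write("\0", 1);
}

// Read back a string written by WriteString
std::string ReadString(std::istream& in)
{
    uint16_t nameSize = 0;
    in.read(reinterpret_cast<char*>(&nameSize), sizeof(nameSize));
    std::string name(nameSize, '\0');
    in.read(&name[0], nameSize);
    name.pop_back(); // drop the null terminator
    return name;
}
```

Because the size precedes the characters, the reader always knows exactly how many bytes to consume, which is the point made in the paragraph above.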
After writing all the filenames, you then write all the
vertex and index buffer data directly to the file. You don’t
need to include the sizes here because they already
appear in the header. For the vertex data, the number of
bytes is the number of vertices times the size of each
vertex. Luckily, you can use a VertexArray helper
function to get the size of each vertex based on layout.
For the index data, you have a fixed size (32-bit indices),
so the total number of bytes is easier to calculate.
Renderer* renderer)
if (inFile.is_open())
MeshBinHeader header;
inFile.read(reinterpret_cast<char*>(&header), sizeof(header));
return false;
// Read in vertices/indices
inFile.read(reinterpret_cast<char*>(indices),
header.mNumIndices * sizeof(uint32_t));
// Delete verts/indices
delete[] verts;
delete[] indices;
mBox = header.mBox;
mRadius = header.mRadius;
return true;
return false;
First, you open the file for reading in binary mode. Next,
you read in the header via the read function. Much as
with write, read takes in a char* for where to write
and the number of bytes to read from the file. Next, you
verify that the signature and version in the header match
what is expected; if they don’t, you can’t load the file.
After this, you read in all the texture filenames and load
them, though we omit that code from Listing 14.16 to
save space. Next, you allocate memory to store the vertex
and index buffers, and you use read to grab the data
from the file. Once you have the vertex and index data,
you can construct the VertexArray object and pass in
all the information it needs. You need to make sure to
clean up the memory and set the mBox and mRadius
members before returning.
mFileName = fileName;
return true;
// ...
GAME PROJECT
This chapter’s game project implements the systems
discussed in this chapter. Everything loads from a
gplevel file, and pressing the R key saves the
current state of the world into
Assets/Saved.gplevel. The project also
implements the binary saving and loading of mesh
files in the .gpmesh.bin format. The code is
available in the book’s GitHub repository, in the
Chapter14 directory. Open Chapter14-
windows.sln in Windows and Chapter14-
mac.xcodeproj on Mac.
SUMMARY
This chapter explores how to create level files in
JSON. Loading from a file requires several systems.
First, you create helper functions that wrap the
functionality of the RapidJSON library to easily be
able to write the game’s types to JSON. You then add
code to set global properties, load in actors, and load
in components associated with the actors. To do this,
you need to add some type information to
components, as well as maps that associate names of
types to a function that can dynamically allocate that
type. You also need to create virtual
LoadProperties functions in both Component
and Actor.
You also need to create code to save the game world to
JSON, and you create helper functions to assist with this
process. At a high level, saving the file requires saving all
the global properties first and then looping through all
the actors and components to write their properties. As
with file loading, you have to create virtual
SaveProperties functions in both Component and
Actor.
ADDITIONAL READING
There are no books devoted specifically to level files
or binary data. However, the classic Game
Programming Gems series has some articles on the
topic. Bruno Sousa’s article discusses how to use
resource files, which are files that combine several
files into one. Martin Brownlow’s article discusses
how to create a save-anywhere system. Finally, David
Koenig’s article looks at how to improve the
performance of loading files.
EXERCISES
In this chapter’s first exercise, you need to reduce the
size of the JSON files created by SaveLevel. In the
second exercise you convert the Animation file
format to binary.
Exercise 14.1
One issue with the SaveLevel code is that you write
every property for every actor and all its components.
However, for a specific subclass like TargetActor,
few if any of the properties or components change
after construction.
You can then use this process for all the different types of
actors. To assist with this, RapidJSON provides
overloaded comparison operators. Two
rapidjson::Values are equal only if they have the
same type and contents. This way, you can eliminate
setting at least most of the components (because they
won’t change). It will require a bit more work to do this
on a granular (per-property) level.
Exercise 14.2
Applying the same binary file techniques you used
for the mesh files, create a binary file format for the
animation files. Because all the tracks of bone
transforms are the same size, you can use a format
where, after writing the header, you write the ID for
each track followed by the entire track information.
For a refresher on the animation file format, refer to
Chapter 12, “Skeletal Animation.”
APPENDIX A
INTERMEDIATE C++
REVIEW
References
A reference is a variable that refers to another
variable that already exists. To denote a variable as a
reference, add an & immediately after the type. For
example, here is how you can declare r as a reference
to the already existing integer i:
int i = 20;
int& r = i; // r refers to i
void Swap(int& a, int& b)
{
   int temp = a;
   a = b;
   b = temp;
}
warning
PASS-BY-VALUE IS DEFAULT: By default, all parameters in C++, even
objects, pass by value. In contrast, languages like Java and C# default to
passing objects by reference.
Pointers
To understand pointers, it first helps to remember
the way computers store variables in memory.
During program execution, entering a function
automatically allocates memory for local variables in
a segment of memory called the stack. This means
that all local variables in a function have memory
addresses known to the C++ program.
int* p = &y; // p stores the address of the existing integer y
*p = 42;     // assigns 42 to y through the pointer
Arrays
An array is a collection of multiple elements of the
same type. The following code declares an array of 10
integers called a and then sets the first element in
the array (index 0) to 50:
int a[10];
a[0] = 50;
int fib[5] = { 0, 1, 1, 2, 3 };
int array[50];
for (int i = 0; i < 50; i++)
array[i] = 0;
warning
ARRAYS DON’T BOUND CHECK: Requesting invalid indices can lead to
memory corruption and other errors. Several tools exist to help find bad
memory accesses, such as the AddressSanitizer tool available in Xcode.
int array[5] = { 2, 4, 6, 8, 10 };
// Memory layout (example addresses):
// array[0]  0xF2E0  2
// array[1]  0xF2E4  4
// array[2]  0xF2E8  6
// array[3]  0xF2EC  8
// array[4]  0xF2F0  10
int x = 20;
int y = 37;
Swap(&x, &y); // pass addresses, for a pointer-based Swap(int*, int*)
float matrix[4][4];
// Code here...
delete dynamicInt;
delete[] dynArray;
class Circle
public:
// ...
private:
Point mCenter;
float mRadius;
};
class Complex
public:
: mReal(real)
, mImaginary(imaginary)
{ }
private:
float mReal;
float mImaginary;
};
c->Negate();
Destructors
Suppose you needed to dynamically allocate arrays of
integers several times throughout a program. Rather
than manually write this code repeatedly, it might
make sense to encapsulate this functionality inside a
DynamicArray class, as in Listing A.3.
class DynamicArray
public:
DynamicArray(int size)
: mSize(size)
, mArray(nullptr)
private:
int* mArray;
int mSize;
};
DynamicArray scores(50);
DynamicArray::~DynamicArray()
{
delete[] mArray;
Complex c2(c1);
DynamicArray array(50);
DynamicArray otherArray(array);
: mSize(other.mSize)
, mArray(new int[other.mSize])
{
   // Deep copy each element from other
   for (int i = 0; i < mSize; i++)
      mArray[i] = other.mArray[i];
}
note
In the C++11 standard, the rule of three expands to the rule of five, as there
are two additional special functions (the move constructor and the move
assignment operator). While this book does use some C++11 features, it
does not use these additional functions.
Operator Overloading
C++ gives programmers the ability to specify the
behavior of built-in operators for custom types. For
example, you can define how the arithmetic
operators work for the Complex class. In the case of
addition, you can declare the + operator as follows:
(a.mImaginary == b.mImaginary);
DynamicArray a1(50);
DynamicArray a2(75);
a1 = a2;
{
   // Delete existing data
   delete[] mArray;
   // Copy from other
   mSize = other.mSize;
   mArray = new int[mSize];
   for (int i = 0; i < mSize; i++)
      mArray[i] = other.mArray[i];
   return *this;
}
a = b = c;
COLLECTIONS
A collection provides a way to store elements of
data. The C++ Standard Library (STL) provides
many different collections, and so it’s important to
understand when to utilize which collections. This
section discusses the most commonly used
collections.
Big-O Notation
Big-O notation describes the rate at which an
algorithm scales as the problem size scales. This rate
is also known as the time complexity of the
algorithm. You can use Big-O to understand the
relative scaling of specific operations on collections.
For example, an operation with a Big-O of O(1)
means that regardless of the number of elements in
the collection, the operation will always take the
same amount of time. On the other hand, a Big-O of
O(n) means that the time complexity is a linear
function of the number of elements.
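As a toy illustration, counting the steps of a linear search shows O(n) behavior directly: in the worst case, every one of the n elements must be examined (assume the hypothetical helper below; it is not a standard library function):

```cpp
#include <cstddef>
#include <vector>

// Linear search that returns how many elements it examined
// before finding target (or the full size if target is absent)
size_t CountStepsLinearSearch(const std::vector<int>& v, int target)
{
    size_t steps = 0;
    for (int x : v)
    {
        ++steps;
        if (x == target)
        {
            break;
        }
    }
    return steps;
}
```

By contrast, indexing into a vector with `v[i]` is a single address computation, so it is O(1) regardless of the vector's size.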
Vector
A vector is a dynamic array that automatically
resizes based on the number of elements in the
collection. To insert elements into a vector, use the
push_back (or emplace_back) member function.
This adds an element to the end (back) of the vector.
For example, the following code declares a vector of
floats and then adds three elements at the end of the
vector:
std::vector<float> vecOfFloats;
vecOfFloats.push_back(1.0f);
vecOfFloats.push_back(2.0f);
vecOfFloats.push_back(3.0f);
Linked List
A linked list is a collection that stores each element
at a separate location in memory and links them
together with pointers. The std::list collection
allows for insertion to both the front and the back of
the list. Use the push_front (or emplace_front)
function to insert into the front, and push_back (or
emplace_back) for the back. The following code
creates a linked list of integers and inserts a handful
of elements:
std::list<int> myList;
myList.push_back(4);
myList.push_back(6);
myList.push_back(8);
myList.push_back(10);
myList.push_front(2);
Reading values from memory is very slow for the CPU, so when it needs
to read a value from memory, it also loads neighboring values into a
high-speed cache. Because elements in a vector are contiguous in
memory, accessing an element at a specific index also loads its
neighboring indices into the cache.
Queues
A queue exhibits first-in, first-out (FIFO)
behavior, much like waiting in a line at a store. With
a queue, you cannot remove elements in an
arbitrary order; you must remove them in the same
order in which they were
added. Although many books use enqueue to
reference insertion into a queue and dequeue to
reference removal from a queue, the implementation
of std::queue uses push (or emplace) for
insertion and pop for removal. To access the element
at the front of the queue, use front.
Click here to view code image
std::queue<int> myQueue;
myQueue.push(10);
myQueue.push(20);
myQueue.push(30);
myQueue.pop();
After the three pushes, the queue contains 10, 20, 30
(from front to back); the call to pop then removes 10
from the front, leaving 20 at the front.
Stack
A stack exhibits last-in, first-out (LIFO)
behavior. For example, if you add the elements A, B,
and C to a stack, you can only remove them in the
order C, B, A. You use the push (or emplace)
function to add an element onto the stack and the
pop function to remove an element from the stack.
The top function accesses the element on the “top”
of the stack. The following code shows std::stack
in action:
Click here to view code image
std::stack<int> myStack;
myStack.push(10);
myStack.push(20);
myStack.push(30);
myStack.pop();
After the three pushes, the stack contains 30, 20, 10
(from top to bottom); the call to pop then removes 30
from the top, leaving 20 on top.
As with std::queue, the major operations for std::stack
all have constant time complexity.
Maps
A map is an ordered collection of {key, value} pairs,
sorted by key. Each key in the map must be unique.
Because a map has both a key type and a value type,
you must specify both types when declaring a map.
The recommended way to add an element to a map is
with the emplace function, which takes in the key
and value as parameters. For example, the following
code creates a std::map of months, where the key is
the number of the month and the value is the string
name of the month:
Click here to view code image
std::map<int, std::string> months;
months.emplace(1, "January");
months.emplace(2, "February");
months.emplace(3, "March");
// ...
Hash Maps
While a regular map maintains an ascending order of
the keys, a hash map (std::unordered_map) is
unordered. In exchange for the lack of ordering,
insertion, removal, and search are all O(1) on
average. Thus, in cases where you need a map but
don’t need ordering, a hash map yields better
performance than a regular map.
Iterators
An iterator refers to a specific element in a
collection, and every standard collection supports
looping over its elements with iterators. The
following code creates a linked list of integers and
then loops over it, printing each element, first with
an explicit iterator type and then with auto:
Click here to view code image
std::list<int> numbers;
numbers.emplace_back(2);
numbers.emplace_back(4);
numbers.emplace_back(6);
for (std::list<int>::iterator iter = numbers.begin();
     iter != numbers.end();
     ++iter)
{
   std::cout << *iter << std::endl;
}
// Equivalently, with auto:
for (auto iter = numbers.begin();
     iter != numbers.end();
     ++iter)
{
   std::cout << *iter << std::endl;
}
You can also use auto for the type when writing a range-
based for loop. However, as with using an explicit type,
this makes a copy of each element. You can combine
auto with const and & to avoid the copy, if needed.
ADDITIONAL READING
There are many excellent resources available online
to help you learn and practice the fundamentals of
C++. One such website is LearnCPP.com, which
contains a very in-depth progression of topics. If you
prefer traditional books, consider Stephen
Prata's book, which covers the basics.
Eric Roberts’s book covers both the fundamentals of
C++ and relevant data structures.
SYMBOLS
∗ operator, 63
+ operator, 64
= operator, 469
== operator, 469
− operator, 62
NUMBERS
2D coordinate systems, 151
2D graphics, 14–15
drawing, 19–21
filtering analog sticks, 267–269
implementing, 18–21
Initialize function, 19
scrolling backgrounds, 51–53
Shutdown function, 19
sprites, 42
animating, 48–50
drawing, 44–48
2D transformations
rotation matrix, 157–158
scale matrix, 157
translation matrices, 158–159
3D
Actor transform, 184, 189–190
BasicMesh shader, 203–204
calculating view-projection matrix, 200
Euler angles, 185–186
lighting. See lighting
MeshComponent, 204–206
quaternions, 186–187
in code, 188–189
combining rotations, 187
quaternion-to-rotation matrix, 188
rotating vectors, 188
spherical linear interpolation (Slerp), 188
transform matrices, 184–185
transforming clip space
projection matrix, 197–200
view matrix, 196–197
updating vertex attributes, 193–194
z-buffering, 201–203
3D AABB, 308
3D coordinate system, 184
3D meshes, drawing, 195–196
3D models
choosing formats, 191–193
loading, 190
vertex and index buffers, 133–138
3D positional audio, 233–234
setting up listeners, 234–236
4x4 matrices, 184–185
A
A*, 106, 111–113
optimizing, 113
AABB (axis-aligned bounding box), 303–306
AABB contains point tests, 308
AABB versus AABB test, 311–312
acceleration, 79
Actor transform, 3D, 184, 189–190
Actor::LoadProperties, 436–437
actors
adding to world transforms, 159–161
associating with sound events, AudioComponent,
237–238
jumping, 248–251
loading in level files, 435–439
saving in level files, 446–448
Actor::SaveProperties, 446–447
adaptive refresh rate, 18
AddButton, 349
adding
an aiming reticule, 352–354
flipY option to UIScreen::DrawTexture, 406
pitch to first-person cameras, 278–280
point lights, 418
PointLightComponent class, 419
positional functionality to SoundEvent, 236–237
radar, 354–358
shaders to games, 146
springs to follow cameras, 283–285
text maps, 359–361
world transforms, to actors, 159–161
AddInt, 444
addition, vector addition, 63–64
AddSprite, 46
AddTarget, 352
adjacency lists, 99
admissible, heuristics, 104
adversarial, 116
AI (artificial intelligence), 91
designing state machines, 92–93
game trees, 116–118
alpha-beta pruning, 121–124
incomplete game trees, 120–121
minimax, 118–119
graphs, 98–100
BFS (breadth-first search), 100–103
implementing state machines, 93–95
pathfinding, 98
Dijkstra’s Algorithm, 113–114
following paths, 114–115
heuristics, 104–105
navigation meshes, 115–116
path nodes, 115
state machine behaviors, 92
states, as classes, 95–98
AIComponent class, 93–95
AIComponent::Update function, 94, 96
AIDeath state, 97
aiming reticule, adding, 352–354
AIPatrol class, 97
AIState, 95
Alarm state, 93
Alert state, 93
alpha, 121
alpha blending, 178–180
alpha values, 14–15
alpha-beta pruning, 121–124
AlphaBetaDecide, 121–122
AlphaBetaMax, 122–123
AlphaBetaMin, 123
ambient light, 207–208
loading global properties, 434–435
analog sticks, 264–267
filtering, 267–269
anchors, 361
angle summation, 309
angles
converting from forward vectors, 67–68
converting to forward vectors, 66–67
determining between two vectors, dot product, 68–70
animating sprites, 48–50
animation, skeletal animation. See skeletal animation
Animation class, 381–382
animation data, 371–372
loading for skeletal animation, 380–385
animations, updating, 386–389
AnimSpriteComponent, 49, 50, 53–54
anisotropic filtering, 399–400
API (application programming interface), 3
Apple macOS, setting up development environments, 2
application programming interface (API), 3
architecture, InputSystem, 251–253
arctangent, 67–68
arrays, 460–461, 462
artificial intelligence. See AI (artificial intelligence)
ASCII, 358–359
AStarScratch, 111
Asteroid constructor, 74–75
Asteroids, 84–86
atan2 function, 67
Attack state, 93
attenuates, 233
attributes
OpenGL, 129
skinning vertex attributes, drawing with, 374–378
vertex attributes, updating, 193–194, 206
audio
3D positional audio, 233–234
setting up listeners, 234–236
audio systems, creating, 224–226
AudioComponent, creating to associate actors with
sound events, 237–238
banks, loading/unloading, 227–229
bootstrapping, 222
buses, 242–243
Doppler effect, 240–241
effects, 241–242
equalization, 242
FMOD, 222–223
banks and events, 226–227
event instances, 229–230
installing, 223–224
listeners, in third-person games, 239–240
mixing, 241–242
occlusion, 243–244
positional functionality, adding to SoundEvent,
236–237
reverb, 242
snapshots, 243
SoundEvent class, 230–233
audio systems, creating, 224–226
AudioComponent, creating to associate actors with
sound events, 237–238
AudioComponent::PlayEvent, 238
AudioComponent::StopAllEvents, 238
AudioComponent::Update, 238
AudioSystem::PlayEvent Implementation with Event
IDs, 230–231
AudioSystem::SetListener, 235–236
AudioSystem::UnloadAllBanks, 228
AudioSystem::Update, 231
auto, 475–476
axis, filters, 265
axis-aligned bounding box. See AABB (axis-aligned
bounding box)
B
back buffer, 17
backgrounds, scrolling backgrounds, 51–53
ball collisions
with SegmentCast, 327–328
testing in PhysWorld, 329–331
BallActor, 328
balls
drawing, 20–23
updating position of, 28–30
banks
FMOD, 226–227
loading/unloading, 227–229
basic input processing, 11–13
Basic.frag file, 141
BasicMesh shader, 203–204
BasicMesh.vert shader, 421
Basic.vert file, writing basic shaders, 139–140
beta, 121
BFS (breadth-first search), 100–103
BGSpriteComponent, 51
BGTexture, 51
bidirectional light, 214
bidirectional reflectance distribution function (BRDF),
209
Big-O notation, 470–471
bilinear filtering, 169, 394, 395–397, 400
bilinear interpolation, 396
binary data, 448–449
binary mesh files, loading, 452–454
saving, binary mesh files, 449–452
binary mesh files
loading, 452–454
saving, 449–452
bind pose, 368
Feline Swordsman, 378
blank windows, creating, 10–11
Blended, 339
blitting, 131
bone hierarchy, 367–368
Bone struct, 370
bones, 366
BoneTransform, 369, 371–372
bootstrapping, audio, 222
bounding spheres, 303
bounding volume tests, 310
AABB versus AABB test, 311–312
capsule versus capsule test, 313–314
sphere versus AABB test, 312–313
sphere versus sphere tests, 311
bounding volumes, 302–303
AABB (axis-aligned bounding box), 303–306
capsules, 306–307
convex polygons, 307
OBB (oriented bounding box), 306
spheres, 303
BoxComponent class, 324–325
BRDF (bidirectional reflectance distribution function),
209
breadth-first search (BFS), 100–103
broadphase techniques, 331
buses, 242–243
Button class, 346–350
buttons
controllers, 262–264
mouse input, 256–258
user interfaces, 346–350
Button::SetName, 347
C
C++ Standard Library, 3
calculating
lighting, 407–408
normal vectors, 70–72
points on line segments, 298
view-projection matrix, 200
weighted averages, in bilinear filtering, 396
world transform matrix, 189
camera position, computing, 282
CameraComponent, 277–278
cameras, 34
first-person camera, 276
adding pitch, 278–280
cameras without pitch, 277–278
first-person models, 280–281
movement, 276–277
follow cameras, 281–283
adding springs, 283–285
orbit cameras, 286–288
spline cameras, 289–292
unprojection, 292–294
without pitch, 277–278
capsule contains point tests, 309
capsule versus capsule test, 313–314
capsules, 306–307
Catmull-Rom spline, 289
CCD (continuous collision detection), 321
ChangeState function, 94, 96, 98
channels, 15, 222
character.jump(), 249
characters, jumping, 248–251
choosing 3D model formats, 191–193
CircleComponent subclass, creating, 83–84
circle-versus-circle intersection, 82
class hierarchies, game objects, 34–36
classes, 464–465
dynamic allocation of, 465
states as, 95–98
clip space, transforming
with projection matrix (3D), 197–200
with view matrix (3D), 196–197
from world space, 161–163
closed set, GBFS (greedy best-first search), 106
collections, 470
collision detection, 81
ball collisions
with SegmentCast, 327–328
testing in PhysWorld, 329–331
bounding volumes, 302–303
AABB (axis-aligned bounding box), 303–306
capsules, 306–307
convex polygons, 307
OBB (oriented bounding box), 306
spheres, 303
BoxComponent class, 324–325
CircleComponent subclass, creating, 83–84
circle-versus-circle intersection, 82
dynamic objects, 321–323
geometric types, 298
line segments, 298–301
planes, 301–302
intersection tests. See intersection tests
line segment tests
line segment versus AABB test, 318–320
line segment versus plane test, 314–315
line segment versus sphere test, 315–317
PhysWorld class, 325–327
player collision, against walls, 331–333
CollisionInfo, 326
color buffer, 15–16
color depth, 15
column vectors, 155, 156
combining
equations, 154
rotation, for 3D, 187
transformations, 152–153, 159
vectors, 63–64
compatibility profiles, OpenGL, 129
CompileShader, 142–143
Complete transition, Alarm state, 93
Component class, 440
component declaration, 39–40
component-based game object model, 36–38
ComponentFunc, 442
components
game objects, 36–38
as hierarchy, 38–40
loading, in level files, 439–444
saving, in level files, 446–448
composite, 248
ComputeGlobalInvBindPose, 379
ComputeMatrixPalette, 384, 389
computing
camera position, 282
positions, with Euler integration, 80
const, 464–465
contains point tests, 308
AABB contains point tests, 308
capsule contains point tests, 309
convex polygon contains point (2D) tests, 309–310
sphere contains point tests, 308
ContainsPoint, 347
context, OpenGL, 130
continuous collision detection (CCD), 321
controller input, 261
buttons, 262–264
disabling controllers, 262
enabling single controllers, 261–262
multiple controllers, 269–270
ControllerState, 263, 267
converting
from angles to forward vectors, 66–67
forward vectors, to angles, 67–68
convex polygon contains point (2D) tests, 309–310
convex polygons, 307
coordinate systems
3D coordinate system, 184
left-handed coordinate system, 184
UV coordinates, 395
coordinates, NDC (normalized device coordinates),
132–133
copy constructor, 467–468
core profile, OpenGL, 129
Create, 410
CreateMirrorTarget, 404
CreateMirrorTexture, 402
CreateRotationZ, 160
CreateScale, 160
CreateTranslation, 160
CreateWorldTransform, 160
cross product, 70–72
crosshair texture, drawing, 353
cross-platform libraries, 3
D
dead zones, 265
Death state transition, 92
debug logging, initializing, 225
Debug_Initialize, 225
deferred shading, 407–408, 424–425
G-buffer
creating, 408–411
writing to, 412–414
global lighting, 414–418
point lights, 418–419
drawing, 421–424
deleting, dynamically allocated arrays, 463
delta time, 24–28
dependency injection, 39
depth buffer, 201
depth buffering. See z-buffering
designing, state machines, AI (artificial intelligence),
92–93
Destroy, 411
destructors, 466–467
development environments, setting up, 2
Apple macOS, 2
Microsoft Windows, 2
dialog boxes, 349–352
diamond inheritance, 36
DiffuseColor, 421
digital signal processing (DSP), 241
Dijkstra’s Algorithm, 113–114
directed, 98
directed edges, 100
directional light, 208
loading global properties, 434–435
directions, vectors, 65–66
disabling controllers, 262
distance, vectors, 64–65
Doppler effect, 240–241
Dot function, 70
dot product, 68–70
double buffering, 16–18
Draw function, 52
Draw3DScene, 404–405, 413
DrawComponent, 36
DrawFromGBuffer, 416–417
drawing
2D graphics, 19–21
3D meshes, 195–196
balls, 20–23
crosshair texture, 353
mirror texture, in HUD, 406–407
paddles, 20–23
point lights, 421–424
radar, 357
with skinning vertex attributes, 374–378
sprites, 44–48
textures, 47
triangles, 146–148
transformations, 148–149
walls, 20–23
DrawScreen, 348
DSP (digital signal processing), 241
dynamic memory allocation, 462–463
dynamic memory allocation of classes, 465
dynamic objects, collision detection, 321–323
DynamicArray, 466, 467
E
edges, 98
effects, audio, 241–242
encapsulating buttons, 346
endianness, 451
equalization, 242
equations, combining, 154
ES profiles, OpenGL, 129
Escape key, 256
Euclidean distance heuristic, 105
Euler angles, 185–186
Euler integration, 80
event instances, FMOD, 229–230
event processing, 11–12
EventInstances, 229–230
events, 11
FMOD, 226–227
input devices, 251
exchange formats, 191
extensions
.frag extension, 139
.vert extension, 139
F
falloff function, 233
falloff radius, 208
FBO (framebuffer objects)
creating, 402–404
rendering, 404–405
Feline Swordsman, bind pose, 378
FIFO (first-in, first-out), 473
Filter1D, 265–267
Filter2D, 268–269
filtering, 400
analog sticks, 267–269
anisotropic filtering, 399–400
bilinear filtering, 394, 395–397, 400
nearest-neighbor filtering, 395
trilinear filtering, 398, 399, 400
filters, for axis, 265
first-in, first-out (FIFO), 473
first-person camera, 276
adding pitch, 278–280
cameras without pitch, 277–278
first-person models, 280–281
movement, 276–277
first-person models, 280–281
first-person movement, 276–277
FixCollisions, 332–333
flat shading, 211
flipY option, adding to UIScreen::DrawTexture, 406
FMOD, 222–223
audio systems, creating, 224–226
banks and events, 226–227
event instances, 229–230
installing, 223–224
occlusion, 244
positional audio, 234
snapshots, 243
FMOD Studio, 223
buses, 242–243
DSP (digital signal processing), 242
FMOD_VECTOR, 234–235
FMOD_ErrorString, 225
follow cameras, 281–283
adding springs, 283–285
FollowCamera, 282
FollowCamera::Update, 283, 284–285
following paths, AI (artificial intelligence), 114–115
font rendering, 338–340
Font::Load, 338–339
Font::RenderText, 339, 340, 359
For loops, 475–476
force, 79–80
force feedback, 5
formats, 3D model formats, choosing, 191–193
forward rendering, 407–408
forward vectors
converting from angles, 66–67
converting to angles, 67–68
FPS (frames per second), 4
FPS angular speed, 277
FPSActor::FixCollisions, 332–333
FPSCamera, 278
FPSCamera::Update, 279
.frag extension, 139
fragment shaders, 139, 412
frame buffer, 201
frame limiting, 25–26
frame rate, 4
framebuffer, 15
framebuffer objects
creating, 402–404
rendering, 404–405
frames, 4–6
rendering, in OpenGL, 131
frames per second (FPS), 4
front buffer, 17
G
game classes
main function, 10–11
skeleton game class, 6–7
Initialize function, 7–9
RunLoop function, 9–10
Shutdown function, 9
Game Declaration, skeleton game class, 7
game loops, 4
frames, 4–6
integrating game objects into, 40–42
single-threaded game loop, 5
game object models, 40
game objects as class hierarchies, 34–36
game objects as hierarchy with components, 38–40
game objects with components, 36–38
game objects, 34
as class hierarchies, 34–36
with components, 36–38
as hierarchy with components, 38–40
integrating, into game loops, 40–42
types of, 34
game projects
3D graphics, 216
Asteroids, 84–86
audio, 244–245
cameras, 295
collision detection, 333–334
converting Asteroids game to OpenGL, 180
intermediate graphics, 425
level files and binary data, 454–455
moving spaceships, 271–272
skeletal animation, 389–390
sprites, 53–55
tower defense game, 124
user interfaces, 362
game time, 24
game trees, 116–118
AI (artificial intelligence)
alpha-beta pruning, 121–124
incomplete game trees, 120–121
minimax, 118–119
Game::GenerateOutput function, 147
Game::LoadData, 430
Game::LoadShaders, 166
games
adding shaders to, 146
Asteroids, 84–86
Pac-Man game, pseudocode, 6
Pong. See Pong
tic-tac-toe, game trees, 116–117
tower defense game, 124
updating, 23
delta time, 24–28
game time, 24
real time, 24
GameState, 120
gamut, 14
GBFS (greedy best-first search), 105–111
G-buffer, 408
creating, 408–411
writing to, 412–414
GBuffer class, creating, 408–411
GBuffer::Create, 410
GBufferGlobal.frag shader, 415–416, 418
GBufferPointLight.frag shader, 419–421
GBufferWrite.frag, 412
geometric types, 298
line segments, 298–301
planes, 301–302
GetCircle function, 84
GetComponentOfType, 441
GetEventInstance, 232
GetFont, 339
GetGlobalPoseAtTime, 382–383, 387–388
GetInt, JsonHelper, 433
GetKeyState, 254, 255
GetKeyValue, 254, 255
GetLeftTrigger(), 267
GetMesh, 195
GetName function, 95
GetNextPoint function, 115
GetPossibleMoves, 120
GetRightTrigger(), 267, 276
GetScore function, 118, 120
GetType, 441
GetVector3, 434
GL_ARRAY_BUFFER, 136
GL_ELEMENT_ARRAY_BUFFER, 136
GL_LINEAR, 399
GL_RGB, 402
GL_RGB16F, 411
GL_RGB32F, 410–411
GL_STATIC_DRAW, 136
GL_TEXTURE_2D, 171
glAttachShader, 145
glBindBuffer, 136
glBindFramebuffer, 402, 404
glBindTexture, 173
glBlendFunc, 179
glBufferData, 174
glCheckFrameBuffer, 403, 404
glClear, 131, 202
glCreateProgram, 145
glDrawBuffers, 403
glDrawElements, 146–147
glEnable, 179
GLEW, initializing, 130–131
glFramebufferTexture, 403
glGenerateMipmap, 399
glGenFrameBuffers, 402
glGenTextures, 171
glGetError, 130
glGetShaderiv, 144
global lighting, 414–418
properties, loading, 430–431
global pose, 368
global properties
loading, 430–434
saving, 446
globally unique IDs (GUIDs), 229
GLSL, 138
Basic.frag file, 141
Basic.vert file, 139–140
glTexImage2D, 171
glVertexAttribIPointer, 376
glVertexAttribPointer, 376
goal nodes, 100
Gouraud shading, 211
gpmesh files, loading, 194–195
graphics
2D graphics. See 2D graphics
intermediate graphics. See intermediate graphics
graphics debuggers, 414
GraphNode, 100
graphs, AI (artificial intelligence), 98–100
BFS (breadth-first search), 100–103
Grassmann product, 187
greedy best-first search (GBFS), 105–111
GUIDs (globally unique IDs), 229
H
HandleKeyPress, 344
hash maps, 475
heads-up display. See HUD (heads-up display)
heuristics, 104–105
A*, 111–113
GBFS (greedy best-first search), 105–111
hierarchies, game objects with components, 38–40
high-quality reflections, 401
homogeneous coordinates, 158
horizontal field of view (FOV), 199
hot swapping, 261, 270
HUD (heads-up display), 337
drawing, mirror texture, 406–407
UI screen stack, 342–344
UI screens, 340–342
HUD elements, 352
adding
an aiming reticule, 352–354
radar, 354–358
HUD::Draw, 353
HUD::UpdateCrosshair, 353
HUD::UpdateRadar, 356–357
I
IDE (integrated development environment), 2
image files, loading, 43–44
IMG_Load, 43
implementing
2D graphics, 18–21
lighting, 212–216
skeletal animation, 373–374
drawing with skinning vertex attributes, 374–378
skeleton game class, 6–7
Initialize function, 7–9
RunLoop function, 9–10
Shutdown function, 9
state machines, AI (artificial intelligence), 93–95
improving texture quality, 394
inadmissible, heuristics, 104
incomplete game trees, 120–121
index buffers, 133–138
inheritance, diamond inheritance, 36
Initialize function
2D graphics, 19
skeleton game class, 7–9
initializing
debug logging, 225
GLEW, 130–131
OpenGL, 128
input devices, 248
analog sticks, 264–267
controller input, 261
analog sticks. See analog sticks
buttons, 262–264
enabling single controllers, 261–262
multiple controllers, 269–270
events, 251
keyboards, 253–256
mouse, 256
buttons and position, 256–258
relative motion, 258–259
scroll wheels, 260–261
polling, 248
positive and negative edges, 248–251
triggers, 264
input mappings, 270–271
InputComponent class, creating, 76–79
InputSystem, architecture, 251–253
InputSystem::Filter2D, 268–269
InputSystem::Initialize, 263
installing
FMOD, 223–224
Xcode, 2
instantaneous tests, 321
integrated development environment (IDE), 2
integrating game objects into game loops, 40–42
intermediate graphics
anisotropic filtering, 399–400
deferred shading, 407–408
drawing mirror texture in HUD, 406–407
global lighting, 414–418
improving texture quality, 394
mipmapping, 397–399
rendering to textures, 400–401
creating framebuffer objects, 402–404
creating textures, 401–402
texture sampling, 395–397
interpolate, 186
Intersect function, 83, 315, 319–320
intersection, circle-versus-circle intersection, 82
intersection tests, 307
bounding volume tests, 310
AABB versus AABB test, 311–312
capsule versus capsule test, 313–314
sphere versus AABB test, 312–313
sphere versus sphere tests, 311
contains point tests, 308
AABB contains point tests, 308
capsule contains point tests, 309
convex polygon contains point (2D) tests, 309–310
sphere contains point tests, 308
inverse bind pose matrix, 371
IsCompiled, 143–144
IsTerminal, 120
IsValidProgram, 145
iterative deepening, 124
iterators, 475–476
J
joints, 366
JSON (JavaScript Object Notation), 191, 448–449
level files
loading actors, 435–439
loading components, 439–444
loading global properties, 430–434
saving, 444–446
saving actors and components, 446–448
saving global properties, 446
JsonHelper, 433, 444
JsonHelper::GetInt, 433
jumping characters, spacebar, 248–251
K
keyboards, 248
input, 253–256
KeyboardState, 254
L
last-in, first-out (LIFO), 474
left-handed coordinate system, 184
length, vectors, 64–65
Length() function, 65
LengthSquared() function, 65
level files
loading, 430
actors, 435–439
components, 439–444
global properties, 430–434
saving, 444–446
actors, 446–448
components, 446–448
global properties, 446
LevelLoader, 437
LevelLoader class, 431
LevelLoader::LoadActors, 438–439
LevelLoader::LoadComponents, 443–444
LevelLoader::LoadGlobalProperties, 434–435
LevelLoader::LoadJSON, 431–433
LevelLoader::SaveActors, 447–448
LevelLoader::SaveLevel, 445
libraries
3D positional audio library, 234–235
C++ Standard Library, 3
OpenGL. See OpenGL
SDL (Simple DirectMedia Layer) library, 4
SDL TTF library, 338
LIFO (last-in, first-out), 474
lighting, 206
ambient light, 207–208
loading global properties, 434–435
bidirectional light, 214
deferred shading. See deferred shading
directional light, 208
global lighting, 414–418
implementing, 212–216
Phong reflection model, 209–211
point lights, 208
adding, 418–419
drawing, 421–424
spotlight, 209
vertex attributes, 206–207
lighting calculations, 407–408
line segment tests, 314
line segment versus AABB test, 318–320
line segment versus plane test, 314–315
line segment versus sphere test, 315–317
line segment versus AABB test, 318–320
line segment versus plane test, 314–315
line segment versus sphere test, 315–317
line segments, 298–301
linear mechanics, 79–80
LineSegment::MinDistSq, 300–301, 309
linked lists, 472
listeners, 233
setting up for 3D positional audio, 234–236
in third-person games, 239–240
listings
AABB::Rotate Implementation, 305–306
Abbreviated Render Declaration, 195–196
Actor Declaration, 38–39
Actor::ComputeWorldTransform Implementation, 161
Actor::LoadProperties Function, 436–437
Actor::RotateToNewForward, 327–328
Actors with Components in JSON (Excerpt from the
Full File), 440
Actor::SaveProperties Implementation, 447
Adding a flipY Option to UIScreen::DrawTexture, 406
AIComponent::ChangeState Implementation, 96
AlphaBetaDecide Implementation, 122
AlphaBetaMin Implementation, 123
AlphaBetaMax Implementation, 123
Animation Declaration, 381–382
AnimSpriteComponent Declaration, 49
AnimSpriteComponent::Update Implementation, 50
Asteroid Constructor, 74–75
AudioComponent Declaration, 237
AudioSystem::LoadBank Implementation, 227–228
AudioSystem::PlayEvent Implementation with Event
IDs, 230–231
AudioSystem::SetListener Implementation, 235–236
AudioSystem::Update Implementation with Event IDs,
231
Basic InputSystem Declarations, 252
Basic.frag Code, 141
Basic.vert Code, 140
The Beginning of a Skeleton Data, 370
The Beginning of an Animation Data, 380–381
BGSpriteComponent Declaration, 51
BoxComponent Declaration, 324
BoxComponent::OnUpdateWorldTransform
Implementation, 325
Breadth-First Search, 102
Button Declaration, 346
Circle Class with const Member, 464
CircleComponent Declaration, 83
CircleComponent Intersection, 83
Component Declaration, 39–40
ComputeGlobalInvBindPose, 379–380
ComputeMatrixPalette, 384
Constructing a Plane from Three Points, 301
ConvexPolygon::Contains Implementation, 310
Creating a Texture for Rendering, 401–402
Creating the Mirror Framebuffer, 403–404
Cube.gpmesh, 192
Current Implementation of SpriteComponent::Draw,
166–167
Declaring Vertex Attributes in the VertexArray
Constructor, 375–376
directional light, loading global properties, 434–435
Drawing MeshComponents in Renderer::Draw, 205
Drawing Point Lights in Renderer::DrawFromGBuffer,
422
English.gptext Text Map File, 359
Filter1D Implementation, 266
Final Version of GetGlobalPoseAtTime, 387–388
First Version of GetGlobalPoseAtTime, 382–383
FollowCamera::Update Implementation (with Spring),
284–285
Font Declaration, 338
Font::Load Implementation, 339
Font::RenderText Implementation, 340
FPS Angular Speed Calculation from the Mouse, 277
FPSActor::FixCollisions, 332–333
FPSCamera::Update Implementation (with Pitch
Added), 279
FPSCamera::Update Implementation (Without Pitch),
278
Game Declaration, 7
Game::GenerateOutput Attempting to Draw Sprites,
147
Game::ProcessInput Implementation, 13
Game::UpdateGame Implementation, 26
Game::UpdateGame Updating Actors, 41–42
GBuffer Declaration, 409
GBuffer::Create Implementation, 410
GBufferGlobal.frag Shader, 415–416
GBufferPointLight.frag Main Function, 420–421
GBufferWrite.frag shader, 412
Greedy Best-First Search, 110–111
HUD::UpdateCrosshair, 353
HUD::UpdateRadar Implementation, 356–357
Initial AudioSystem Declaration, 224
Initial ControllerState, 263
Initial MouseState Declaration, 258
Initial Shader Declaration, 142
Initial UIScreen Declaration, 341–342
InputComponent Declaration, 77
InputComponent::ProcessInput Implementation,
77–78
InputSystem::Filter2D, 268–269
InputSystem::ProcessEvent Implementation for the
Scroll Wheel, 260
JsonHelper::GetInt Implementation, 433
KeyboardState Declaration, 254
KeyboardState::GetKeyState, 255
Level with Actors (Level1.gplevel), 435–436
Level with Global Lighting Properties, 430–431
LevelLoader::LoadActors Implementation, 438–439
LevelLoader::LoadComponents Implementation,
443–444
LevelLoader::LoadGlobalProperties, 434–435
LevelLoader::LoadJSON, 432
LevelLoader::SaveActors Implementation, 447–448
LevelLoader::SaveLevel Implementation, 445
Line Segment Versus AABB Helper Function, 319
Line Segment Versus AABB Intersection, 320
Line Segment Versus Plane Intersection, 315
Line Segment Versus Sphere Intersection, 317
LineSegment::MinDistSq Implementation, 300–301
Loading the G-buffer Global Lighting Shader, 416
Loop over the Adjacent Nodes in an A* Search, 112–113
main Implementation, 10
MaxPlayer and MinPlayer Functions, 118–119
MaxPlayerLimit Implementation, 120
Mesh Declaration, 194
MeshBinHeader Struct, 449–450
MeshComponent Declaration, 204
MeshComponent::Draw Implementation, 205
MeshComponent::LoadProperties Implementation,
442
Mesh::LoadBinary Outline, 452–453
Mesh::SaveBinary Implementation, 450–451
MoveComponent Declaration, 73
MoveComponent::Update Implementation, 74
MoveComponent::Update Implementation with
Quaternions, 190
OrbitCamera::Update Implementation, 287–288
Pac-Man Game Loop Pseudocode, 6
Phong.frag Lighting Uniforms, 212
Phong.frag Main Function, 215
Phong.vert Main Function, 214
PhysWorld::SegmentCast, 326–327
PhysWorld::TestPairWise, 329
PhysWorld::TestSweepAndPrune, 330–331
PointLightComponent Declaration, 419
PointLightComponent::Draw Implementation,
423–424
Quaternion Functions of Note, 189
Renderer::Draw Updated to Render Both Mirror and
Default Passes, 405
Renderer::Draw3DScene Helper Function, 404–405
Renderer::DrawFromGBuffer Implementation, 417
Renderer::GetScreenDirection Implementation, 294
Renderer::Unproject Implementation, 293
Requesting OpenGL Attributes, 129
Shader::CompileShader Implementation, 143
Shader::IsCompiled Implementation, 144
Shader::Load Implementation, 144–145
Shader::SetMatrixUniform, 165
Ship Declaration, 54
SkeletalMeshComponent Declaration, 377
Skeleton Declaration, 378–379
Skeleton PhysWorld Declaration, 325–326
Skinned.vert Main Function, 385–386
SoundEvent Declaration, 232
SoundEvent's Is3D and Set3DAttributes
Implementation, 236–237
SplineCamera::Update, 291–292
Spline::Compute Implementation, 290
SpriteComponent Declaration, 45–46
SpriteComponent::Draw Implementation, 48
Sprite.frag Implementation, 177
Sprite.vert Implementation, 176
Swept-Sphere Intersection, 323
Texture Declaration, 170–171
Texture::Load Implementation, 172–173
Transform.vert Vertex Shader, 164
UIScreen::ProcessInput, 348–349
Updating Position and Rotation of the First-Person
Model, 280
Using SegmentCast for Ball Movement, 328
VertexArray Declaration, 135
Z-Buffering Pseudocode, 202
Load function, 144–145, 338–339, 379
LoadActors, 438
LoadBank, 227–228
LoadBinary, 452–453
LoadComponents, 443–444
LoadData function, 44
LoadGlobalProperties, 434
loading
3D models, 190
animation data, skeletal animation, 380–385
banks, 227–229
binary mesh files, 452–454
G-buffer global lighting shader, 416
global properties, 430–434
gpmesh files, 194–195
image files, 43–44
level files, 430
actors, 435–439
components, 439–444
global properties, 430–434
shaders
adding shaders to games, 146
CompileShader, 142–143
IsCompiled, 143–144
IsValidProgram, 145
Load, 144–145
OpenGL, 141–142
SetActive, 145
Unload, 146
skeletons, 378–380
textures, 170–173
LoadJSON, 431–433
LoadLevel function, 431
LoadProperties, 436, 441
LoadShaders, 146, 415
LoadText, 359
local lighting model, 209
local pose, 368
localization, 358, 361
text maps, adding, 359–361
Unicode, 358–359
look-at matrix, 196
low-pass filter, occlusion, 243
low-quality reflections, 401
M
Mac, installing FMOD, 223
macros
offsetof macro, 175
SDL_BUTTON macro, 257
main function, 10–11
Manhattan distance heuristics, 104–105
maps, 474–475
hash maps, 475
mass, 79–80
Math::Clamp, 266
Math::Cos, 67
Math.h library
Dot function, 70
Normalize(), 66
Math::Max, 313
Math::NearZero function, 73–74
Math::Sin, 67
Math::ToDegrees, 47
matrices
inverse bind pose matrix, 371
look-at matrix, 196
transformations, matrix multiplication, 154–155
transformations and, 154
transforming points, 155–157
view matrix (3D), 196–197
view-projection matrix, calculating, 200
matrix multiplication, 154–155, 156
matrix palettes, 385
max players, 116
maxAngularSpeed, 277
maxMouseSpeed, 277
MaxPlayer, 119
MaxPlayerLimit function, 120
mCurrFrame, 50
membership tests, 106
memory allocation, 462–463
memory usage, reducing, 375
Mesh, 194–195
MeshBinHeader, 449–450
MeshBinHeader*, 451
MeshComponent, 439
3D, 204–206
MeshComponent::GetType, 441
MeshComponent::LoadProperties, 442
meshes, drawing 3D meshes, 195–196
Mesh::Load, 195, 452
Mesh::LoadBinary, 452–453
Mesh::SaveBinary, 450–451
Microsoft Visual Studio Community 2017, 2
Microsoft Windows
API (application programming interface), 3
setting up development environments, 2
min players, 116
MinDistSq, 300–301, 313
minimax, 118–119
MinimaxDecide, 119
mirror framebuffer, creating, 403–404
mirror texture, drawing in HUD, 406–407
MirrorCamera, 405
mirrors, rendering framebuffer objects, 404–405
mixing audio, 241–242
monolithic hierarchy, 36
mouse, 256
buttons and position, 256–258
FPS angular speed, 277
relative motion, 258–259
scroll wheels, 260–261
MouseState, 257–258
MoveComponent, 190, 276
creating, 73–75
movement, 270
first-person camera, 276–277
InputComponent class, creating, 76–79
MoveComponent class, creating, 73–75
multiple controllers, 269–270
multiplying vectors, 62–63
N
narrowphase techniques, 331
nav mesh, 115–116
NavComponent, 114
navigation meshes, 115–116
NDC (normalized device coordinates), 132–133
nearest-neighbor filtering, 395
nearest-neighbor mipmapping, 398
negative edges, 248–251
Newtonian physics, 79
linear mechanics, 79–80
node adoption, 111
nodes, 98
goal nodes, 100
parent nodes, 100
path nodes, 115
start nodes, 100
NodeToPointerMap, 101
non-player characters (NPCs), 115
normal vectors, calculating, 70–72
normalization, unit vectors, 65–66
Normalize(), 66
normalized device coordinates (NDC), 132–133
NPCs (non-player characters), 115
numeric integration, 80–81
O
OBB (oriented bounding box), 306
object space, 149–150
occlusion, 243–244
offsetof macro, 175
OnClick, 347, 349
OnEnter, 95
OnExit, 95
OnUpdateWorldTransform, 324–325
open set, GBFS (greedy best-first search), 106
OpenGL
alpha blending, 178–180
anisotropic filtering, 399
context, 130
initializing, 128
loading shaders, 141–142
adding shaders to games, 146
CompileShader, 142–143
IsCompiled, 143–144
IsValidProgram, 145
Load, 144–145
SetActive, 145
Unload, 146
mipmapping, 399
rendering frames, 131
requesting attributes, 129
setting up OpenGL window, 128–129
texture mapping, 167–170
updating vertex format, 173–175
UBOs (uniform buffer objects), 165
vertex array object, 134–135
writing basic shaders, 139–141
operator overloading, 468–470
operator+, 469
optimizing A*, 113
orbit cameras, 286–288
OrbitCamera class, 287
OrbitCamera::Update, 287–288
oriented bounding box (OBB), 306
orthographic projection, 197
outMap, 101
overdraw, 200
overloading operators, 468–470
P
Pac-Man game
partial class hierarchy, 36
pseudocode, 6
state machine behaviors, 92
paddles
drawing, 20–23
updating position of, 26–28
painter’s algorithm, issues with in 3D, 200–201
parallax effect, 53
parameters, sound events, 223
parent nodes, 100
pass by value, 459
path nodes, 115
path-cost component, 111
pathfinding, AI (artificial intelligence), 98
BFS (breadth-first search), 100–103
Dijkstra’s Algorithm, 113–114
following paths, 114–115
heuristics, 104–105
navigation meshes, 115–116
path nodes, 115
Patrol state, 92, 94–95
pause menu, 344–345
perspective divide, 199
perspective projection, 197
phantom inputs, 265
Phong reflection model, 209–211
Phong shading, 211
physics, Newtonian physics, 79
linear mechanics, 79–80
PhysWorld class, 325–327
testing ball collisions, 329–331
PhysWorld::TestPairWise, 329
pitch, 185
pixels, 14
PlaneActor, 331, 333
planes, 301–302
platform-specific libraries, 3
player collision, against walls, 331–333
PlayEvent, 229, 230, 238
playing event instances, 229–230
PNG files, 43
point light fragment shaders, 419–421
point lights, 208
adding, 418–419
drawing, 421–424
pointers, 459–460
PointLightComponent class, adding, 419
PointLightComponent::Draw, 423–424
points, transforming, with matrices, 155–157
polling, 248
polygons, 131–132
Pong
drawing walls, balls, and paddles, 20–23
updating
ball’s position, 28–30
paddle’s position, 26–28
poses, skeletons and, 367–370
position of
balls, updating, 28–30
mouse, 256–258
paddles, updating, 26–28
positional functionality, adding to SoundEvent, 236–237
positions, computing with Euler integration, 80
positive edges, 248–251
PosNormTex format, 191–192
PrepareForUpdate, 255, 264
printf function, 8
processing, basic input processing, 11–13
ProcessInput, 11–12, 13, 251, 343
InputComponent class, 76
ProcessKeyboard function, 54
projection matrix (3D), transforming, clip space, 197–200
properties, loading global properties, 430–434
Q
quality, improving texture quality, 394
quaternions, 186–187
in code, 188–189
combining, rotations, 187
MoveComponent, 190
quaternion-to-rotation matrix, 188
rotating vectors, 188
spherical linear interpolation (Slerp), 188
quaternion-to-rotation matrix, 188
queues, 473–474
quit dialog box, 351
R
radar
adding, 354–358
drawing, 357
Random function, 75
range-based for loops, 475–476
RapidJSON, 194, 430, 449
saving level files, 444–446
raster graphics, 14
real time, 24
red, green, blue (RGB), 14
reducing memory usage, 375
references, 458–459, 464–465
reflections, 401
refresh rate, 16
RegisterState function, 96
relative motion, mouse input, 258–259
RemoteTarget, 352
RemoveActor, 41
Renderer, abbreviated declaration, 195–196
renderer, 19
Renderer::Draw, 343, 377, 405
Renderer::DrawFromGBuffer, 417
Renderer::GetScreenDirection, 294
Renderer::Initialize, 404, 411
Renderer::Shutdown, 404, 411
rendering
framebuffer objects, 404–405
frames, OpenGL, 131
to textures, 400–401
creating framebuffer objects, 402–404
creating textures, 401–402
RenderText, 338, 340, 347
repositories, 3
requesting OpenGL, attributes, 129
resolution, 14
supporting multiple resolutions, 361–362
reticule, adding aiming reticule, 352–354
reverb, 242
RGB (red, green, and blue), 14
RGBA, 14–15
roll, 185
RotateToNewForward, 327–328
rotating vectors, quaternions, 188
rotation, 152
combining for 3D, 187
Euler angles, 3D, 185–186
quaternions, 3D, 186–187
quaternion-to-rotation matrix, 188
rotation matrix, 157–158
row vectors, 155
RunLoop function, 9–10
S
sample data, 226–227
SaveActors, 447–448
SaveBinary, 450–451
SaveComponents, 448
SaveLevel, 445, 446
saving
binary mesh files, 449–452
level files, 444–446
actors, 446–448
components, 446–448
global properties, 446
scalar multiplication, 62–63
scale, 151–152
scale matrix, 157
scaling vectors, 62–63
screen tearing, 16–17
scroll wheels, mouse input, 260–261
scrolling backgrounds, 51–53
SDL (Simple DirectMedia Layer) library, 4
input devices, 251
SDL 2D coordinate system, mouse position, 257
SDL controller axis constants, 264
SDL Image, 43
SDL image file formats, 43
SDL subsystem flags, 8
SDL TTF library, 338
SDL_BUTTON macro, 257
SDL_CreateRenderer, 19
SDL_CreateTextureFromSurface, 43
SDL_CreateWindow function, 8, 128
SDL_DestroyRenderer, 19
SDL_Event, 12
SDL_GameControllerAddMappingsFromFile, 262
SDL_GameControllerGetAxis, 267
SDL_GameControllerGetButton, 264
SDL_GameControllerOpen, 261
SDL_GetKeyboardState, 13, 248, 253
SDL_GetMouseState, 256–258
SDL_GetRelativeMouseState, 259, 277
SDL_GetTicks, 25
SDL_INIT_AUDIO, 8
SDL_INIT_GAMECONTROLLER, 8
SDL_INIT_HAPTIC, 8
SDL_INIT_VIDEO, 8
SDL_Log function, 8
SDL_PollEvent function, 11–12
SDL_QueryTexture, 46
SDL_QUIT, 12
SDL_Rect, 21
SDL_RenderClear, 20
SDL_RenderCopy, 47
SDL_RenderCopyEx, 47
SDL_Renderer, 128
SDL_RenderFillRect, 20, 22
SDL_RenderPresent, 20
SDL_SCANCODE, 13
SDL_SetRenderDrawColor, 20
SDL_ShowCursor, 256
SDL_Surface, 43, 339
SDL_Texture, 44
SDL_WINDOW_FULLSCREEN, 9
SDL_WINDOW_FULLSCREEN_DESKTOP, 9
SDL_WINDOW_OPENGL, 9
SDL_WINDOW_RESIZABLE, 9
SegmentCast, 326
ball collisions, 327–328
Set3DAttributes, 236–237
set3DSettings, 241
SetActive, 137, 145, 416
SetAnimTextures function, 50
SetIntUniform, 415–416
SetListener, 235, 236
SetMatrixUniform, 164, 165, 386, 415–416
SetName, 347
SetTexture function, 46
SetTexturesActive, 416–417
setting up
development environments, 2
Apple macOS, 2
Microsoft Windows, 2
listeners, for 3D positional audio, 234–236
OpenGL window, 128–129
SetViewMatrix, 277–278
SetVolume, 242
shader programs, 145
shaders, 138
adding shaders to games, 146
BasicMesh shader, 203–204
BasicMesh.vert shader, 421
fragment shaders, 139
loading, 141–142
CompileShader, 142–143
IsCompiled, 143–144
IsValidProgram, 145
Load, 144–145
SetActive, 145
Unload, 146
point light fragment shaders, 419–421
skinning vertex shaders, 385–387
updating, 175–176
Sprite.frag shader, 176–178
to use transform matrices, 163–167
vertex shaders, 138–139
writing basic shaders, 139
Basic.frag file, 141
Basic.vert file, 139–140
Ship::UpdateActor function, 54
Shutdown function, 2D graphics, 19
Shutdown function, 9
Simple DirectMedia Layer. See SDL
Simple OpenGL Image Library (SOIL), 170
SimpleViewProjection matrix, 163
single controllers, enabling, 261–262
single-threaded game loop, 5
skeletal animation, 365–367
animation data, 371–372
drawing with skinning vertex attributes, 374–378
implementing, 373–374
inverse bind pose matrix, 371
loading animation data, 380–385
loading skeletons, 378–380
skeletons and poses, 367–370
skinning, 372–373
skinning vertex shader, 385–387
updating, 386–389
skeletal hierarchy, 367–368
SkeletalMeshComponent, 376–377, 380, 383–384, 441
SkeletalMeshComponent::Draw, 386
SkeletalMeshComponent::Update, 388–389
skeleton game class, 6–7
Initialize function, 7–9
RunLoop function, 9–10
Shutdown function, 9
skeletons
loading, 378–380
poses and, 367–370
skinned animation. See skeletal animation
Skinned.vert, 374, 385–386
skinning, 372–373
skinning vertex attributes, drawing with, 374–378
skinning vertex shaders, 385–387
Slerp (spherical linear interpolation), 188
smoothstep function, 421
SnapToIdeal, 285
SOIL (Simple OpenGL Image Library), 170, 172
sound effects, third-person games, 239–240
sound events, 223
sound occlusion, 243–244
SoundEvent class, 230–233
adding positional functionality to, 236–237
SoundEvent::IsValid, 233
sounds. See audio
source code, 3
source control systems, 3
space
clip space, 161–163
object space, 149–150
world space, 150
transforming, 150–151
sphere contains point tests, 308
sphere versus AABB test, 312–313
sphere versus sphere tests, 311
spheres, 303
spherical linear interpolation (Slerp), 188
spline cameras, 289–292
SplineCamera class, 290
SplineCamera::Update, 291–292
Spline::Compute, 290
splines, Catmull-Rom spline, 289
spotlight, 209
springs, adding to follow cameras, 283–285
SpriteComponent, 45–46, 166
SpriteComponent::Draw, 47–48, 166–167, 173
Sprite.frag shader, 176–178
sprites, 42, 148
animating, 48–50
drawing, 44–48
texture-mapped sprites, 178
Sprite.vert shader, 175–176
stacks, 474
start nodes, 100
state machine behaviors, AI (artificial intelligence), 92
state machines
designing, 92–93
implementing, 93–95
transitions, 94
states, as classes, 95–98
static objects, 34
std::function, 346, 437, 442
std::map, 475
std::pair, 476
std::queue, 474
std::string, 359, 437
std::unordered_map, 475
std::vector, 319, 370
strafe speed, 276
streaming data, 226–227
subtraction, vectors, 61–62
supporting multiple resolutions, 361–362
surround sound, 234
Swap, 462
sweep-and-prune, 331
swept-sphere intersection, 323
swizzle, 213–214
T
TargetActor, 435
TargetComponent, 353, 354
testing ball collisions in PhysWorld, 329–331
TestPairWise, 329
tests
contains point tests. See contains point tests
instantaneous tests, 321
intersection tests. See intersection tests
line segment tests. See line segment tests
TestSweepAndPrune, 330–331
texel density, 394
texels, 394
text, localization, 361
text maps, adding, 359–361
texture coordinates, 168, 395
texture mapping, 167–170
updating, vertex format, 173–175
texture quality, improving, 394
texture sampling, 395–397
Texture::Load, 172–173
texture-mapped sprites, 178
textures
drawing, 47
loading, 170–173
rendering to, 400–401
creating framebuffer objects, 402–404
creating textures, 401–402
Texture::SetActive, 173
Texture::Unload, 173
third-person games, listeners, 239–240
tic-tac-toe, game trees, 116–117
ToMatrix, 369
tower defense game, 124
tracking
event instances, 230
loaded banks and events, 227
tracks, 380
transform matrices
3D, 184–185
updating shaders, 163–167
transformation matrix, 155
transformations, 148–149
combining, 152–153, 159
matrices, 154
matrix multiplication, 154–155
TransformComponent, 37–38
transforming
clip space
projection matrix (3D), 197–200
view matrix (3D), 196–197
points, with matrices, 155–157
from world space, to clip space, 161–163
world space, 150–151, 157
rotation, 152
rotation matrix, 157–158
scale, 151–152
scale matrix, 157
translation, 151
translation matrices, 158–159
Transform.vert, 163
TransformWithPerspDiv, 293
transitions, 92
state machines, 94
translation, 151
translation matrices, 158–159
triangles, 131–132
drawing, 146–148
transformations, 148–149
fragment shaders, 139
normalized device coordinates (NDC), 132–133
texture mapping, 167–170
vertex and index buffers, 133–138
vertex shaders, 138–139
triggers, 34, 264
trilinear filtering, 398, 399, 400
TTF_OpenFont, 338
TTF_Quit, 338
TTF_RenderText_Blended, 339
turning on anisotropic filtering, 400
TurnTo function, 114
types of game objects, 34
U
UBOs (uniform buffer objects), 165
UI screen stack, 342–344
UI screens, 340–342
UIScreen, 340–342
UIScreen::DrawScreen, 348
UIScreen::DrawTexture, 406
UIScreen::ProcessInput, 348–349
undirected graphs, 98
Unicode, 358–359
uniform buffer objects (UBOs), 165
uniform cost search, 114
unit quaternions, 186
unit vectors, 65–66
Unload function, 146
unloading banks, 227–229
Unproject, 294
unprojection, 292–294
unweighted graphs, 98–99
Update function, 39, 50, 388–389
UpdateActor, 39
UpdateComponents, 39
UpdateCrosshair, 353
UpdateGame, 25, 26, 41–42
UpdateMinMax, 304, 305
updating
animations, skeletal animation, 386–389
balls, position of, 28–30
games, 23
delta time, 24–28
game time, 24
real time, 24
paddles, position of, 26–28
shaders
Sprite.frag shader, 176–178
Sprite.vert shader, 175–176
to use transform matrices, 163–167
vertex attributes, 193–194
vertex format, 173–175
user interfaces (UI)
buttons, 346–350
dialog boxes, 349–352
font rendering, 338–340
localization, 358, 361
text maps, 359–361
Unicode, 358–359
pause menu, 344–345
supporting multiple resolutions, 361–362
UI screen stack, 342–344
UI screens, 340–342
UTF-8, 358
UV coordinates, 395
V
VAO (vertex array object), 150
variable time steps, 80–81
VecToFMOD, 234–235
vector addition, 63–64
vector subtraction, 61–62
vectors, 59–61, 471–472
column vectors, 155, 156
combining, 63–64
converting forward vectors to angles, 67–68
converting from angles to forward vectors, 66–67
determining angles between, dot product, 68–70
determining directions, 65–66
determining distance, length, 64–65
normal vectors, calculating, 70–72
rotating, with quaternions, 188
row vectors, 155
scaling, 62–63
velocity, 79–80
.vert extension, 139
vertex array object (VAO), 150
vertex, creating, 163
vertex attributes
lighting, 206–207
updating, 193–194
vertex buffers, 133–138
vertex format, updating, 173–175
vertex normal, 206
vertex shaders, 138–139
VertexArray class, 135, 452
VertexArray constructor, 375–376
vertical synchronization (vsync), 17
vertices, 98
view matrix (3D), transforming, clip space, 196–197
view-projection matrix, 162
calculating, 200
Visual Studio, 2
vsync (vertical synchronization), 17
W
walls
drawing, 20–23
player collision against, 331–333
waypoint graphs, 115
weighted graphs, 99
weights, edges, 98–99
Window Creation flags, 9
Windows, FMOD, installing, 223
windows, creating blank windows, 10–11
world space, 150
combining transformations, 152–153
transforming, 150–151, 157
to clip space, 161–163
rotation, 152
rotation matrix, 157–158
scale, 151–152
scale matrix, 157
translation, 151
translation matrices, 158–159
world transform matrix, calculating, 189
world transforms, adding to actors, 159–161
writing
basic shaders, 139
Basic.frag file, 141
Basic.vert file, 139–140
to G-buffer, 412–414
X
Xcode, installing, 2
.xyz syntax, 213–214
Y
yaw, 185
Z
z-buffer, 201
z-buffering, 201–203
Code Snippets