Unity 5 Game Optimization - Sample Chapter
Community Experience Distilled
Chris Dickinson
mathematics, and video games. He received his master's degree in physics with
electronics from the University of Leeds in 2005, and immediately traveled to
California to work on scientific research in the heart of Silicon Valley. Finding
that career path unsuitable, he began working in the software industry.
Over the last decade, he has made a career in software development, becoming
a senior software developer. Chris has primarily worked in software automation
and internal test tool development, but his passion for video games never fully
faded. In 2010, he took the path of discovering the secrets of game development
and 3D graphics by completing a second degree: a bachelor's degree in game and
simulation programming. He authored a tutorial book on game physics (Learning
Game Physics with Bullet Physics and OpenGL by Packt Publishing). He continues
to work in software development, creating independent game projects in his spare
time with tools such as Unity 3D.
Preface
User experience is a critical component of any game. User experience includes not
only our game's story and its gameplay, but also how smoothly the graphics run,
how reliably it connects to multiplayer servers, how responsive it is to user input,
and even how large the final application file size is due to the prevalence of app
stores and cloud downloads. The barrier to entering game development has been
lowered considerably thanks to the release of cheap, AAA-industry-level game
development tools such as Unity. However, the features and quality of the final
product that our players expect us to provide are increasing with every passing day.
We should expect that every facet of our game can and will be scrutinized by players
and critics alike.
The goals of performance optimization are deeply entwined with user experience.
Poorly optimized games can result in low frame rates, freezes, crashes, input lag,
long loading times, inconsistent and jittery runtime behavior, physics engine
breakdowns, and even excessively high battery power consumption (particularly
important in this era of mobile devices). Having just one of these issues can be a
game developer's worst nightmare as reviews will tend to focus on the one thing
that we did badly, in spite of all the things that we did well.
Performance is all about making the best use of the available resources, which
includes the CPU resources such as CPU cycles and main memory space, Graphics
Processing Unit (GPU) resources such as memory space (VRAM) and memory
bandwidth, and so on. But optimization also means making sure that no single
resource causes a bottleneck at an inappropriate time, and that the highest priority
tasks get taken care of first. Even small, intermittent hiccups and sluggishness in
performance can pull the player out of the experience, breaking immersion and
limiting our potential to create the experience we intended.
It is also important to decide when to take a step back and stop making performance
enhancements. In a world with infinite time and resources, there would always be
another way to make it better, faster, or easier to maintain. There must be a point
during development where we decide that the product has reached acceptable levels
of quality. If not, we risk dooming ourselves to implementing further changes that
result in little or no tangible benefit.
The best way to decide if a performance issue is worth fixing is to answer the
question "will the user notice it?" If the answer to this question is "no", then
performance optimization would be a wasted effort. There is an old saying in
software development:
Chapter 3, The Benefits of Batching, explores Unity's Dynamic and Static Batching
systems to ease the burden on the rendering system.
Chapter 4, Kickstart Your Art, helps you understand the underlying technology behind
our art assets and learn how to avoid common pitfalls with importing, compression,
and encoding.
Chapter 5, Faster Physics, is about investigating the nuances of Unity's physics system
for both 3D and 2D games, and how to properly organize our physics objects for
improved performance.
Chapter 6, Dynamic Graphics, provides an in-depth exploration of the rendering
system, explaining how to improve applications that suffer rendering bottlenecks
in either the GPU or the CPU, along with specialized techniques for mobile devices.
Chapter 7, Masterful Memory Management, examines the inner workings of the
Unity Engine, the Mono Framework, and how memory is managed within
these components to protect our application from heap allocations and runtime
garbage collection.
Chapter 8, Tactical Tips and Tricks, deals with a multitude of useful techniques used
by professionals to improve project workflow and scene management.
Detecting Performance Issues
Performance evaluation for most software products is a very scientific process:
determine the maximum supported performance metrics (number of concurrent
users, maximum allowed memory usage, CPU usage, and so on); perform load
testing against the application in scenarios that try to simulate real-world behavior;
gather instrumentation data from test cases; analyze the data for performance
bottlenecks; complete a root-cause analysis; make changes in the configuration
or application code to fix the issue; and repeat.
Just because game development is a very artistic process does not mean it should
not be treated in equally objective and scientific ways. Our game should have a
target audience in mind, who can tell us the hardware limitations our game might be
under. We can perform runtime testing of our application, gather data from multiple
components (CPU, GPU, memory, physics, rendering, and so on), and compare
them against the desired metrics. We can use this data to identify bottlenecks in our
application, perform additional instrumentation to determine the root cause of the
issue, and approach the problem from a variety of angles.
To give us the tools and knowledge to complete this process, this chapter will
introduce a variety of methods that we will use throughout the book to determine
whether we have a performance problem, and where the root cause of the
performance issue can be found. These skills will give us the techniques we need
to detect, analyze, and prove that performance issues are plaguing our Unity
application, and where we should begin to make changes. In doing so, you will
prepare yourselves for the remaining chapters where you will learn what can be
done about the problems you're facing.
We will begin with an exploration of the Unity Profiler and its myriad features.
We will then explore a handful of scripting techniques to narrow down our search
for the elusive bottleneck, and conclude with some tips on making the most of
both techniques.
Rendering statistics
Chapter 1
Remote instances of the application on an iOS device (iPad or iPhone)
We will briefly cover the requirements for setting up the Profiler in each of
these contexts.
Editor profiling
Profiling the Editor itself, such as profiling custom Editor Scripts, can be enabled with
the Profile Editor option in the Profiler window as shown in the following screenshot.
Note that this requires the Active Profiler option to be configured to Editor.
You should now see reporting data collecting in the Profiler window.
1. Ensure that the Use Development Mode and Autoconnect Profiler flags are
enabled when the application is built.
2. Connect both the iOS device and Mac to a local or ad hoc WiFi network.
3. Attach the iOS device to the Mac via the USB or the Lightning cable.
4. Begin building the application with the Build & Run option as normal.
5. Open the Profiler window in the Unity Editor and select the device under
Active Profiler.
You should now see the iOS device's profiling data gathering in the Profiler window.
The Profiler uses ports 54998 to 55511 to broadcast profiling
data. Make sure these ports are available for outbound traffic
if there is a firewall on the network.
You should now see the Android device's profiling data gathering in the Profiler
Window.
For ADB profiling, follow the given steps:
1. From the Windows command prompt, run the adb devices command,
which checks if the device is recognized by ADB (if not, then the specific
device drivers for the device may need to be installed and/or USB debugging
needs to be enabled on the target device).
Note that, if the adb devices command isn't found
when it is run from the command prompt, then the
Android SDK folder may need to be appended onto the
Environment's PATH variable.
2. Ensure that the Use Development Mode and Autoconnect Profiler flags are
enabled when the application is built.
3. Attach the Android device to the desktop device via the cable
(for example, USB).
4. Begin building the application with the Build & Run option as normal.
5. Open the Profiler Window in the Unity Editor and select the device under
Active Profiler.
You should now see the Android device's profiling data gathering in the
Profiler window.
The screenshot here labels the Profiler window's three main sections: Controls, Timeline View, and Breakdown View.
Controls
The top bar contains multiple controls we can use to affect what is being profiled and
how deeply data is gathered from the system. They are:
Record: Enabling this option will make the Profiler continuously record
profiling data. Note that data can only be recorded if Play Mode is enabled
(and not paused) or if Profile Editor is enabled.
Deep Profile: Ordinary profiling will only record the time and memory
allocations made by any Unity callback methods, such as Awake(), Start(),
Update(), FixedUpdate(), and so on. Enabling Deep Profile recompiles
our scripts to measure each and every invoked method. This causes an even
greater instrumentation cost during runtime and uses significantly more
memory since data is being collected for the entire call stack at runtime.
As a consequence, Deep Profiling may not even be possible in large projects
running on weak hardware, as Unity may run out of memory before testing
can even begin!
Note that Deep Profiling requires the project to be
recompiled before profiling can begin, so it is best to
avoid toggling the option during runtime.
Because this option measures all methods across our codebase in a blind
fashion, it should not be enabled during most of our profiling tests. This
option is best reserved for when default profiling is not providing enough
detail, or in small test Scenes, which are used to profile a small subset of
game features.
If Deep Profiling is required for larger projects and Scenes, but the Deep
Profile option is too much of a hindrance during runtime, then there are
alternatives that can be found in the upcoming section titled Targeted Profiling
of code segments.
Active Profiler: This drop-down offers choices to select the target
instance of Unity we wish to profile; this, as we've learned, can be the
current Editor application, a local standalone instance of our application,
or an instance of our application running on a remote device.
Clear: This clears all profiling data from the Timeline View.
Frame Selection: The Frame counter shows how many frames have been
profiled, and which frame is currently selected in the Timeline View. There
are two buttons to move the currently selected frame forward or backward
by one frame, and a third button (the Current button) that resets the selected
frame to the most recent frame and keeps that position. This will cause the
Breakdown View to always show the profiling data for the current frame
during runtime profiling.
Timeline View: The Timeline View reveals profiling data that has been
collected during runtime, organized by areas depending on which
component of the engine was involved.
Each Area has multiple colored boxes for various subsections of those
components. These colored boxes can be toggled to reveal/hide the
corresponding data types within the Timeline View.
Each Area focuses on profiling data for a different component of the Unity
engine. When an Area is selected in the Timeline View, essential information
for that component will be revealed in the Breakdown View for the currently
selected frame.
The Breakdown View shows very different information, depending on which
Area is currently selected.
Areas can be removed from the Timeline View by clicking on the
'X' at the top right of an Area. Areas can be restored to the Timeline
View through the Add Profiler option in the Controls bar.
CPU Area
This Area shows CPU Usage for multiple Unity subsystems during runtime, such
as MonoBehaviour components, cameras, some rendering and physics processes,
user interfaces (including the Editor's interface, if we're running through the Editor),
audio processing, the Profiler itself, and more.
There are three ways of displaying CPU Usage data in the Breakdown View:
Hierarchy
Raw Hierarchy
Timeline
The Hierarchy Mode groups similar data elements and global Unity function calls
together for convenience; for instance, rendering delimiters such as BeginGUI()
and EndGUI() calls are combined together in this Mode.
The Raw Hierarchy Mode will separate global Unity function calls into individual
lines. This will tend to make the Breakdown View more difficult to read, but may
be helpful if we're trying to count how many times a particular global method has
been invoked, or determining if one of these calls is costing more CPU/memory than
expected. For example, each BeginGUI() and EndGUI() call will be separated into
different entries, possibly cluttering the Breakdown View, making it difficult to read.
Perhaps, the most useful mode for the CPU Area is the Timeline Mode option (not
to be confused with the main Timeline View). This Mode organizes CPU usage
during the current frame by how the call stack expanded and contracted during
processing. Blocks at the top of this view were directly called by the Unity Engine
(such as the Start(), Awake(), or Update() methods), while blocks underneath
them are methods that those methods had called, which can include methods on
other Components or objects.
Meanwhile, the width of a given CPU Timeline Block gives us the relative time it
took to process that method compared to other blocks around it. In addition,
method calls that consume relatively little processing time compared to the
greedier methods are shown as gray boxes to keep them out of sight.
The design of the CPU Timeline Mode offers a very clean and organized way of
determining which particular method in the call stack is consuming the most time,
and how that processing time measures up against other methods being called
during the same frame. This allows us to gauge which method is the biggest culprit
with minimal effort.
For example, let's assume that we are looking at a performance problem in the
following screenshot. We can tell, with a quick glance, that there are three methods
that are causing a problem, and they each consume similar amounts of processing
time, due to having similar widths.
In this example, the good news is that we have three possible methods through
which to find performance improvements, which means lots of opportunities to
find code that can be improved. The bad news is that increasing the performance
of one method will only improve about one-third of the total processing for that
frame. Hence, all three methods will need to be examined and improved in order
to minimize the amount of processing time during this frame.
The CPU Area will be most useful during Chapter 2, Scripting Strategies.
Simple Mode
Detailed Mode
It's easy to overlook the obvious when problem solving, and performance
optimization is just another form of problem solving. The goal is to use Profilers and
data analysis to search our codebase for clues about where a problem originates, and
how significant it is. It's often very easy to get distracted by invalid data or jump to
conclusions because we're being too impatient or missed a subtle clue. Many of us
have run into occasions, during software debugging, where we could have found
the root cause of the problem much faster if we had simply challenged and verified
our earlier assumptions. Always approaching debugging under the belief that the
problem is highly complex and technical is a good way to waste valuable time and
effort. Performance analysis is no different.
A checklist of tasks would be helpful to keep us focused on the issue, and not waste
time chasing "ghosts". Every project is different and has a different set of concerns
and design paradigms, but the following checklist is general enough that it should be
able to apply to any Unity project:
Verifying the Script appears in the Scene the correct number of times
We should also double-check that the GameObjects they are attached to are
still enabled, since we may have disabled them during earlier testing, or
someone/something has accidentally deactivated the object.
If we expected only one of the Components to appear in the Scene, but the shortlist
revealed more than one, then we may wish to rethink our earlier assumptions about
what's causing the bottlenecks. We may wish to write some initialization code that
prevents this from ever happening again, and/or write some custom Editor helpers
to display warnings to any level designers who might be making this mistake.
Preventing casual mistakes like this is essential for good productivity, since
experience tells us that, if we don't explicitly disallow something, then someone,
somewhere, at some point, for whatever reason, will do it anyway, and cost us a
good deal of analysis work.
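Such a guard might be sketched as follows. This is a hypothetical example (MyComponent is an invented name, not from this chapter): a Component that logs a warning during initialization if duplicates of itself exist in the Scene.

```csharp
using UnityEngine;

// Hypothetical sketch: warn if this Component appears in the Scene more
// than once. Attach this check to whichever Component should be unique.
public class MyComponent : MonoBehaviour {
    void Awake() {
        // FindObjectsOfType only returns Components on active GameObjects
        if (FindObjectsOfType<MyComponent>().Length > 1) {
            Debug.LogWarning("Multiple instances of MyComponent found in the Scene!");
        }
    }
}
```

A custom Editor script could perform the same check at design time, but the runtime warning above is the simplest safety net.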
One common mistake (that I have admittedly fallen victim to multiple times during
the writing of this book) is: if we are trying to initiate a test with a keystroke and
we have the Profiler open, we should not forget to click back into the Editor's Game
window before triggering the keystroke! If the Profiler is the most recently clicked
window, then the Editor will send keystroke events to that, instead of the runtime
application, and hence no GameObject will catch the event for that keystroke.
Vertical Sync (otherwise known as VSync) is used to match the application's
frame rate to the frame rate of the device it is being displayed on (for example, the
monitor). Executing the Profiler with this feature enabled will generate a lot of spikes
in the CPU usage area under the heading WaitForTargetFPS, as the application
intentionally slows itself down to match the frame rate of the display. This will
generate unnecessary clutter, making it harder to spot the real issue(s). We should
make sure to disable the VSync colored box under the CPU Area when we're on the
lookout for CPU spikes during performance tests. We can disable the VSync feature
entirely by navigating to Edit | Project Settings | Quality and then the subpage for
the currently selected build platform.
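As an alternative sketch (assuming Unity's QualitySettings and Application APIs, rather than a technique shown in this chapter), VSync can also be disabled from code when a profiling Scene starts:

```csharp
using UnityEngine;

// A sketch: disable VSync from code at startup, as an alternative to the
// Quality Settings page. Attach to any GameObject in the test Scene.
public class DisableVSync : MonoBehaviour {
    void Awake() {
        QualitySettings.vSyncCount = 0;    // 0 = do not wait for vertical sync
        Application.targetFrameRate = -1;  // -1 = render as fast as possible
    }
}
```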
We should also ensure that a drop in performance isn't a direct result of a
massive number of exceptions and error messages appearing in the Editor
console. Unity's Debug.Log(), and similar methods such as Debug.LogError(),
Debug.LogWarning(), and so on, are notoriously expensive in terms of CPU usage
and heap memory consumption, which can then cause garbage collection to occur
and even more lost CPU cycles.
This overhead is usually unnoticeable to a human being looking at the project in
Editor Mode, where most errors come from the compiler or misconfigured objects.
But they can be problematic when used during any kind of runtime process;
especially during profiling, where we wish to observe how the game runs in the
absence of external disruptions. For example, if we are missing an object reference
that we were supposed to assign through the Editor and it is being used in an
Update() method, then a single MonoBehaviour could be throwing new exceptions
every single update. This adds lots of unnecessary noise to our profiling data.
Note that we can disable the Info or Warning checkboxes (shown in the following
screenshot) for the project during Play Mode runtime, but it still costs CPU and
memory to execute debug statements, even though they are not being rendered. It
is often a good practice to keep all of these options enabled, to verify that we're not
missing anything important.
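One common mitigation, shown here as a sketch rather than a technique from this chapter, is to route log calls through a wrapper marked with the [Conditional] attribute, so an entire category of logging can be compiled out of profiling or release builds. DebugUtils and ENABLE_LOGS are hypothetical names, and Console.WriteLine stands in for Unity's logging:

```csharp
#define ENABLE_LOGS  // remove this symbol to strip all DebugUtils.Log() calls

using System;
using System.Diagnostics;

public static class DebugUtils {
    // When ENABLE_LOGS is not defined, the compiler removes every call to
    // this method, including evaluation of its argument expression.
    [Conditional("ENABLE_LOGS")]
    public static void Log(string message) {
        Console.WriteLine(message);  // in Unity, this would call UnityEngine.Debug.Log()
    }
}

public static class LogDemo {
    public static void Main() {
        DebugUtils.Log("this call disappears entirely when ENABLE_LOGS is undefined");
    }
}
```

Because the calls are removed at compile time, the cost of building the message string also vanishes, unlike a wrapper that merely checks a boolean at runtime.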
The BeginSample() method has an overload that allows a custom name for the
sample to appear in the CPU Usage Area's Hierarchy Mode view. For example, the
following code will profile invocations of this method and make the data appear in
the Breakdown View under a custom heading:
void DoSomethingCompletelyStupid() {
    Profiler.BeginSample("My Profiler Sample");

    List<int> listOfInts = new List<int>();
    for(int i = 0; i < 1000000; ++i) {
        listOfInts.Add(i);
    }

    Profiler.EndSample();
}
We should expect that invoking this poorly designed method (it generates a list
containing a million integers, and then does absolutely nothing with it) will cause
a huge spike in CPU usage, chew up several megabytes of memory, and appear in
the Profiler Breakdown View under the heading My Profiler Sample as the following
screenshot shows:
Note that these custom sample names do not appear at the root of the hierarchy
when we perform Deep Profiling. The following screenshot shows the Breakdown
View for the same code under Deep Profiling:
Note how the custom name for the sample does not appear at the top of the sample,
where we may expect it to. It's unclear what causes this phenomenon, but this can
cause some confusion when examining the Deep Profiling data within Hierarchy
Mode, so it is good to be aware of it.
using UnityEngine;
using System;
using System.Diagnostics;
using System.Collections;

public class CustomTimer : IDisposable {
    private string m_timerName;
    private int m_numTests;
    private Stopwatch m_watch;

    // the timer begins counting as soon as the object is created
    public CustomTimer(string timerName, int numTests) {
        m_timerName = timerName;
        m_numTests = numTests;
        m_watch = Stopwatch.StartNew();
    }

    // invoked automatically when the 'using' block ends
    public void Dispose() {
        m_watch.Stop();
        float ms = m_watch.ElapsedMilliseconds;
        UnityEngine.Debug.Log(string.Format("{0} finished: {1:0.00}ms total, {2:0.000000}ms per test for {3} tests", m_timerName, ms, ms / m_numTests, m_numTests));
    }
}
There are three things to note when using this approach. Firstly, we are only
measuring an average over multiple method invocations. If processing time varies enormously
between invocations, then that will not be well-represented in the final average.
Secondly, if memory access is common, then repeatedly requesting the same blocks
of memory will result in an artificially higher cache hit rate, which will bring the
average time down when compared to a typical invocation. Thirdly, the effects of JIT
compilation will be effectively hidden for similarly artificial reasons as it only affects
the first invocation of the method. JIT compilation is something that will be covered
in more detail in Chapter 7, Masterful Memory Management.
The using block is typically used to safely ensure that unmanaged resources are
properly destroyed when they go out of scope. When the using block ends, it
will automatically invoke the object's Dispose() method to handle any cleanup
operations. In order to achieve this, the object must implement the IDisposable
interface, which forces it to define the Dispose() method.
However, the same language feature can be used to create a distinct code block
that creates a short-term object, which then automatically processes something
useful when the code block ends.
Note that the using block should not be confused with the
using statement, which is used at the start of a script file
to pull in additional namespaces. It's extremely ironic that
the keyword for managing namespaces in C# has a naming
conflict with another keyword.
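To illustrate the pattern in isolation, here is a minimal sketch of a disposable scope object (hypothetical names, with console output standing in for Unity logging):

```csharp
using System;

// Minimal example of the 'using' block pattern: the constructor runs when
// the block is entered, and Dispose() runs automatically when it ends,
// even if an exception is thrown inside the block.
public class Scope : IDisposable {
    private readonly string _name;

    public Scope(string name) {
        _name = name;
        Console.WriteLine("Entering " + _name);
    }

    public void Dispose() {
        Console.WriteLine("Exiting " + _name);
    }
}

public static class ScopeDemo {
    public static void Main() {
        using (new Scope("test block")) {
            Console.WriteLine("doing work");
        }
        // prints: Entering test block, doing work, Exiting test block
    }
}
```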
As a result, the using block and the CustomTimer class give us a clean way of
wrapping our target test code in a way which makes it obvious when and where
it is being used.
Another concern to worry about is application warm up. Unity has a significant
startup cost when a Scene begins, given the number of calls to various GameObjects'
Awake() and Start() methods, as well as initialization of other components such as
the Physics and Rendering systems. This early overhead might only last a second, but
that can have a significant effect on the results of our testing. This makes it crucial that
any runtime testing begins after the application has reached a steady state.
If possible, it would be wise to wrap the target code block in an Input.GetKeyDown()
method check in order to have control over when it is invoked. For example, the
following code will only execute our test method when the Space Bar is pressed:
if (Input.GetKeyDown(KeyCode.Space)) {
    int numTests = 1000;
    using (new CustomTimer("Controlled Test", numTests)) {
        for(int i = 0; i < numTests; ++i) {
            TestFunction();
        }
    }
}
There are three important design features of the CustomTimer class: it only prints a
single log message for the entire test, only reads the value from the Stopwatch after
it has been stopped, and uses string.Format() for generating a custom string.
As explained earlier, Unity's console logging mechanism is prohibitively expensive.
As a result, we should never use these logging methods in the middle of a
profiling test (or even gameplay, for that matter). If we find ourselves absolutely
needing detailed profiling data that prints out lots of individual messages (such as
performing a timing test on each iteration of a loop, to find which iteration is costing
more time than the rest), then it would be wiser to cache the logging data and print it
all at the end, as the CustomTimer class does. This will reduce runtime overhead, at
the cost of some memory consumption. The alternative is that many milliseconds are
lost to printing each Debug.Log() message in the middle of the test, which pollutes
the results.
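The caching approach described above can be sketched as follows. TestFunction() is a stand-in workload, and console output substitutes for Unity's logging:

```csharp
using System;
using System.Text;
using System.Diagnostics;

public static class PerIterationTimer {
    // stand-in for the code under test; assume any workload here
    static void TestFunction() {
        int sum = 0;
        for (int i = 0; i < 10000; ++i) { sum += i; }
    }

    public static void Main() {
        StringBuilder report = new StringBuilder();
        Stopwatch watch = new Stopwatch();

        for (int i = 0; i < 10; ++i) {
            watch.Reset();
            watch.Start();
            TestFunction();
            watch.Stop();

            // cache the per-iteration result instead of logging it immediately
            report.AppendFormat("iteration {0}: {1} ticks", i, watch.ElapsedTicks);
            report.AppendLine();
        }

        // a single log call at the end of the test
        Console.WriteLine(report.ToString());  // in Unity: UnityEngine.Debug.Log(...)
    }
}
```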
The second feature is that the Stopwatch is stopped before the value is read. This
is fairly obvious; reading the value while it is still counting might give us a slightly
different value than stopping it first. Unless we dive deep into the Mono project
source code (and the specific version Unity uses), we might not know the exact
implementation of how Stopwatch counts time, at what points CPU ticks are
counted, and at what moments any application context switching is triggered by the
OS. So, it is often better to err on the side of caution and prevent any more counting
before we attempt to access the value.
Finally, there's the usage of string.Format(). This will be covered in more detail
in Chapter 7, Masterful Memory Management, but the short explanation is that this
method is used because generating custom strings using the + operator results in a
surprisingly large amount of memory consumption, which attracts the attention of
the garbage collector. This would conflict with our goal of achieving accurate timing
and analysis.
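As a quick sketch of the difference in usage (this only demonstrates output equivalence; the memory behavior is covered in Chapter 7):

```csharp
using System;

public static class FormatDemo {
    public static void Main() {
        string name = "Controlled Test";
        float ms = 12.5f;
        int numTests = 1000;

        // building the message with the + operator
        string concatenated = name + " finished: " + ms + "ms for " + numTests + " tests";

        // building the same message with string.Format()
        string formatted = string.Format("{0} finished: {1}ms for {2} tests", name, ms, numTests);

        Console.WriteLine(formatted == concatenated);  // prints: True
    }
}
```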
Profiler binary data can be saved into a file through script code, but there
is no built-in way to view this data
These issues make it very tricky to perform large-scale or long-term testing with the
Unity Profiler. They have been raised in Unity's Issue Tracker tool for several years,
and there doesn't appear to be any salvation in sight. So, we must rely on our own
ingenuity to solve this problem.
Fortunately, the Profiler class exposes a few methods that we can use to control how
the Profiler logs information:
1. The Profiler.enabled property can be used to enable/disable the Profiler,
which is the equivalent of clicking on the Record button in the Control View
of the Profiler.
Note that changing Profiler.enabled does not change
the visible state of the Record button in the Profiler's
Controls bar. This will cause some confusing conflicts if
we're controlling the Profiler through both code and the
user interface at the same time.
2. The Profiler.logFile property sets the current path of the log file that the
Profiler prints data out to. Be aware that this file only contains a printout
of the application's frame rate over time, and none of the useful data we
normally find in the Profiler's Timeline View. To save that kind of data as a
binary file, we must use the options that follow.
3. The Profiler.enableBinaryLog property will enable/disable logging of an
additional file filled with binary data, which includes all of the important
values we want to save from the Timeline and Breakdown Views. The file
location and name will be the same as the value of Profiler.logFile, but
with .data appended to the end.
With these options, we can generate a simple data-saving tool that will generate
large amounts of Profiler data separated into multiple files, which we can then
peruse at a later date.
After a WWW object completes its current task, such as downloading a file (WWW)
http://docs.unity3d.com/Manual/Coroutines.html
http://docs.unity3d.com/Manual/ExecutionOrder.html
Getting back to the task at hand, the following is the class definition for our
ProfilerDataSaverComponent, which makes use of a Coroutine to repeat
an action every 300 frames:
using UnityEngine;
using System.Text;
using System.Collections;

public class ProfilerDataSaverComponent : MonoBehaviour {

    int _count = 0;

    void Start() {
        Profiler.logFile = "";
    }

    void Update () {
        if (Input.GetKey (KeyCode.LeftControl) && Input.GetKeyDown (KeyCode.H)) {
            StopAllCoroutines();
            _count = 0;
            StartCoroutine(SaveProfilerData());
        }
    }

    IEnumerator SaveProfilerData() {
        // keep calling this method until Play Mode stops
        while (true) {

            // generate the file path
            string filepath = Application.persistentDataPath + "/profilerLog" + _count;

            // set the log file and enable the profiler
            Profiler.logFile = filepath;
            Profiler.enableBinaryLog = true;
            Profiler.enabled = true;

            // count 300 frames
            for (int i = 0; i < 300; ++i) {
                yield return new WaitForEndOfFrame();

                // workaround to keep the Profiler running
                if (!Profiler.enabled)
                    Profiler.enabled = true;
            }

            // start again with the next file
            _count++;
        }
    }
}
Try attaching this Component to any GameObject in the Scene, and press Ctrl + H
(OSX users will want to replace the KeyCode.LeftControl code with something
such as KeyCode.LeftCommand). The Profiler will start gathering information
(whether or not the Profiler Window is open!) and, using a simple Coroutine,
will pump the data out into a series of files under wherever
Application.persistentDataPath is pointing to.
Note that the location of Application.persistentDataPath varies
depending on the Operating System. Check the Unity Documentation
for more details at http://docs.unity3d.com/ScriptReference/
Application-persistentDataPath.html.
Each file pair should contain 300 frames worth of Profiler data, which skirts around
the 300 frame limit in the Profiler window. All we need now is a way of presenting
the data in the Profiler window.
Here is a screenshot of data files that have been generated by
ProfilerDataSaverComponent:
Note that the first file may contain less than 300 frames if
some frames were lost during Profiler warm up.
using UnityEngine;
using UnityEditor;
using System.IO;
using System.Collections;
using System.Collections.Generic;
using System.Text.RegularExpressions;
                // not a binary file, add it to the list
                Debug.Log ("Found file: " + thisPath);
                s_cachedFilePaths.Add (thisPath);
            }
        }

        s_chosenIndex = -1;
    }

    void OnGUI () {
        if (GUILayout.Button ("Find Files")) {
            ReadProfilerDataFiles();
        }

        if (s_cachedFilePaths == null)
            return;

        EditorGUILayout.Space ();
        EditorGUILayout.LabelField ("Files");
        EditorGUILayout.BeginHorizontal ();

        // create some styles to organize the buttons, and show
        // the most recently-selected button with red text
        GUIStyle defaultStyle = new GUIStyle(GUI.skin.button);
        defaultStyle.fixedWidth = 40f;
        GUIStyle highlightedStyle = new GUIStyle (defaultStyle);
        highlightedStyle.normal.textColor = Color.red;

        for (int i = 0; i < s_cachedFilePaths.Count; ++i) {
            // list 5 items per row
            if (i % 5 == 0) {
                EditorGUILayout.EndHorizontal ();
                EditorGUILayout.BeginHorizontal ();
            }

            GUIStyle thisStyle = null;
            if (s_chosenIndex == i) {
                thisStyle = highlightedStyle;
            } else {
The first step in creating any custom EditorWindow is creating a menu entry point
with a [MenuItem] attribute and then creating an instance of a Window object to
control. Both of these occur within the Init() method.
We're also calling the ReadProfilerDataFiles() method during initialization. This
method reads all files found within the Application.persistentDataPath folder
(the same location our ProfilerDataSaverComponent saves data files to) and adds
them to a cache of filenames to use later.
Finally, there is the OnGUI() method. This method does the bulk of the work. It
provides a button to reload the files if needed, verifies that the cached filenames
have been read, and provides a series of buttons to load each file into the Profiler.
It also highlights the most recently clicked button with red text using a custom
GUIStyle, making it easy to see which file's contents are visible in the Profiler
at the current moment.
The ProfilerDataLoaderWindow can be accessed by navigating to Window |
ProfilerDataLoader in the Editor interface, as shown in the following screenshot:
Here is a screenshot of the display with multiple files available to be loaded. Clicking
on any of the numbered buttons will push the Profiler data contents of that file into
the Profiler.
Reducing noise
The classical definition of noise in computer science is meaningless data, and a batch
of profiling data that was blindly captured with no specific target in mind is always
full of data which won't interest us. More data takes more time to mentally process
and filter, which can be very distracting. One of the best methods to avoid this is to
simply reduce the amount of data we need to process, by stripping away any data
deemed nonvital to the current situation.
Reducing clutter in the Profiler's graphical interface will make it easier to determine
which component is causing a spike in resource usage. Remember to use the
colored boxes in each Timeline area to narrow the search. However, these settings
are autosaved in the Editor, so be sure to re-enable any disabled boxes before the
next profiling session; otherwise, we might miss something important next time!
Summary
You learned a great deal throughout this chapter on how to detect performance
issues within your application. You learned about many of the Profiler's features
and secrets, you explored a variety of tactics to investigate performance issues with
a more hands-on approach, and you've been introduced to a variety of different tips
and strategies to follow. You can use these to improve your productivity immensely,
so long as you appreciate the wisdom behind them and remember to exploit them
when the situation makes it possible.
This chapter has introduced us to the tips, tactics, and strategies we need to find a
performance problem that needs improvement. During the remaining chapters,
we will explore methods on how to fix issues, and improve performance whenever
possible. So, give yourself a pat on the back for getting through the boring part first,
and let's move on to learning some strategies to improve our C# scripting practices.