Unity3D Course Outline by Saqib Javid
Tool: Unity3D
Language: C#
Benefits of Unity3D
• Build a solid foundation for game design and game development that will help you build your own
games.
• This game engine is actively developing and getting more and more features with each release.
• Huge range of supported platforms from the web and mobile devices to high-end PC and consoles.
• Unity has one of the biggest communities.
• Integration with the .NET world.
• Unity offers a lot of ready-to-use solutions and assets.
• It’s easy to learn.
• With Unity’s flexible engine and service ecosystem, you have complete control over creating, running
and improving your video game.
3. Unity3D C# Concepts
3.1 Script As MonoBehavior Component
3.2 What is namespace?
3.3 C# Coding Standards
3.4 Data types & Variables
3.5 Types of Methods
3.6 If Statements
3.7 Switch Statement
3.8 Loops
3.9 Arrays
3.10 Scope And Access Modifiers
3.11 Classes
3.12 Basic Pillars of OOP
5. Physics
5.1 Colliders & Triggers
5.2 Rigid bodies
5.3 Joints
5.4 Raycasting
5.5 Physics Material
6. Animations
6.1 Animation & Animator
6.2 Animation State Machine
6.3 Blend Trees
6.4 Timeline & Cinemachine
7. Advanced Concepts of Unity3D
7.1 Setup Project for Android
7.2 Design Your Project Architecture.
7.3 Debugging
7.4 Profiler
7.5 Navigation
7.6 Light Baking
7.7 Occlusion Culling
7.8 Mobile Game Optimization Tips
8. Projects
8.1 3D Car Parking Game
8.2 Zombie Shooting Game
Introduction
1.1 Unity 3D Introduction
Unity is a cross-platform game engine initially released by Unity Technologies in 2005. The focus of Unity
lies in the development of both 2D and 3D games and interactive content. Unity now supports
over 20 different target platforms for deploying, while its most popular platforms are the PC, Android and
iOS systems.
Unity features a complete toolkit for designing and building games, including interfaces for graphics, audio,
and level-building tools, requiring minimal use of external programs to work on projects.
Features of Unity3D
• Easy workflow, allowing developers to rapidly assemble scenes in an intuitive editor workspace
• Quality game creation, such as AAA visuals, high-definition audio, and full-throttle action, without any
glitches on screen
• Dedicated tools for both 2D and 3D game creation with shared conventions to make it easy for
developers
• A very unique and flexible animation system to create natural animations in very little time
• Smooth frame rate with reliable performance on all the platforms developers publish their games
• One-click ability to deploy to all platforms from desktops to browsers to mobiles to consoles, within
minutes
• Reduce the time of development by using already created reusable assets available on the huge Asset Store.
In summary, compared to other game engines, Unity is developer-friendly, easy to use, free for independent developers, and feature-rich.
When Unity Hub starts up you may see a Windows Firewall pop-up. Just click the "Allow Access" button.
Now that Unity Hub is open, we can begin installing a version of Unity.
Navigate to the Installs tab, located on the left hand side.
Next up select the version of Unity, in this case it will be the latest version, which is what we use for our
courses. Make sure the version selected is Unity 2019.4.x
Unity handles the installation of Visual Studio for us, so make sure that the box is ticked for Visual Studio
Community 2019. This is what we will be using to create C# code for our games.
Step 4 - Activating Unity Licence
Click Activate New License and the option to choose the type of license to activate (Unity Personal, Unity
Plus or Pro) appears.
Click I Don’t use Unity in a professional capacity. Then click Done.
The interface is highly customizable and can provide you with as much or as little information as you need.
In the upper right hand corner, you’ll see five buttons. Select the last one on the right. This is the Layout
Dropdown. From the list of options, select the 2 by 3 option.
2. Game View
The Game view represents the player's perspective of the game. This is where you can play your game and
see how all the various mechanics work with one another.
3. Hierarchy Window
The Hierarchy window contains a list of all the current GameObjects used in your game. But what is a
GameObject? That’s an easy one: A GameObject is an object in your game.
4. Project Window
The Project window contains all assets used by your game. You can organize your assets by folders.
When you wish to use them, you can simply drag those assets from the Project window to the
Hierarchy window.
Alternatively, you can drag them from the Project window to the Scene view. If you drag files from your
computer into the Project window, Unity will automatically import those as assets.
5. Inspector Window
The Inspector window lets you configure any GameObject. When you select a GameObject in the
Hierarchy, the Inspector will list all the GameObject’s components and their properties.
For instance, a light will have a color field along with an intensity field. You can also alter values on your
GameObjects while the game is being played.
6. Toolbar
You use the toolbar to manipulate the various GameObjects in the Scene view. You’ll use the following
tools as you develop your game, so get familiar with them by trying them in your empty project!
7. Play Buttons
• First is the play button, which starts and stops your game.
• Second is the pause button. This pauses and lets you make modifications to the game. Just like in
play mode, those modifications will be lost once you stop the game. Editing GameObjects during
play and pausing is a cheat and balancing system that allows you to experiment on your game
without the danger of permanently breaking it.
• Finally is the step button. This lets you step through your game one frame at a time. This is handy
when you want to observe animations on a frame-by-frame basis, or when you want to check the
state of particular GameObjects during gameplay.
8. Miscellaneous Editor Settings
• The first is the Collab drop-down, found on the right hand side of the toolbar. This is one of Unity’s
latest services that helps big teams collaborate on a single project seamlessly.
• The next button is the Services button. The services button is where you can add additional Unity
services to the game. Clicking on the button will prompt you to create a Unity Project ID. Once you
add a Project ID, you will be able to add services to your project.
You can also add:
➢ Analytics
➢ In-Game Ads
➢ Multiplayer Support
➢ In-App Purchasing
➢ Performance Reporting
• Next up is the Account button. This lets you manage your Unity account. It allows you to view your
account data, sign in and out, and upgrade.
• The fourth button is the Layers button. You can use Layers for such things as preventing the rendering
of GameObjects, or excluding GameObjects from physics events like collisions.
• The final button, Layouts, lets you create and save the layout of views in your editor and switch
between them. Unity is wonderfully customizable. Each of the different views in a layout can be
resized, docked, moved, or even removed from the editor altogether.
9. Console
The console is used to see the output of code. These outputs can be used to quickly test a line of code
without having to give added functionality for testing.
Three types of messages usually appear in the default console. These messages can be related to most of
the compiler standards:
• Errors
• Warnings
• Messages
Errors: errors are exceptions or issues that will prevent the code from running at all.
Warnings: warnings are also issues, but this will not stop your code from running but may pose issues
during runtime.
Messages: messages are outputs that convey something to the user, but they do not usually cause an
issue.
We can even have the console output our own messages, warnings, and errors. To do that, we will use the
Debug class. The Debug class, which is part of the UnityEngine namespace, gives us methods to write messages to the
console, quite similar to how you would create normal output messages in your starter programs.
These methods are:
➢ Debug.Log
➢ Debug.LogWarning
➢ Debug.LogError
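As a quick sketch, here is how these three methods might be used from a script (the class name is just an example):

```csharp
using UnityEngine;

public class ConsoleExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("This is a plain message.");
        Debug.LogWarning("This is a warning: something may be wrong.");
        Debug.LogError("This is an error: something is definitely wrong.");
    }
}
```

Each call appears in the Console window with its matching icon, so you can filter by message type.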
Basic Concepts of Game
Development
2.1 Introduction of Transform (Position, Rotation, Scale)
The Transform component determines the Position, Rotation, and Scale of each object in the scene. Every
GameObject has a Transform.
Properties
Property: Function:
Position Position of the Transform in X, Y, and Z coordinates.
Rotation Rotation of the Transform around the X, Y, and Z axes, measured in degrees.
Scale Scale of the Transform along the X, Y, and Z axes. A value of "1" is the original size (the size at which
the object was imported).
using UnityEngine;
public class Example : MonoBehaviour
{
    // Moves this object 10 units upwards when the scene starts
    void Start()
    {
        transform.position += Vector3.up * 10.0f;
    }
}
A GameObject is a container; we have to add pieces to the GameObject container to make it into a
character, a tree, a light, a sound, or whatever else you would like it to be. Each piece is called a
component.
Depending on what kind of object you wish to create, you add different combinations of
components to a GameObject. You can compare a GameObject with an empty pan and components
with different ingredients that make up your recipe of gameplay. Unity has many different in-built
component types, and you can also make your own components using the Unity Scripting API.
• GameObjects can contain other GameObjects. This behavior allows the organizing and
parenting of GameObjects that are related to each other. More importantly, changes to parent
GameObjects may affect their children; more on this in just a moment.
• Models are converted into GameObjects. Unity creates GameObjects for the various pieces of
your model that you can alter like any other GameObject.
• Everything contained in the Hierarchy is a GameObject. Even things such as lights and
cameras are GameObjects. If it is in the Hierarchy, it is a GameObject that's subject to your
command.
• What is Prefab?
Actually prefab is the copy of the game object. Using prefab we can store game objects with its properties
and components already set. So we can reuse it in many ways. It contains a hierarchy of game objects. In
other words we can say that prefab is a container which could be empty or contains any number of game
objects.
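As a small sketch of how a prefab is typically used from code (the field and class names here are hypothetical), you reference the prefab asset in a script and clone it with Instantiate:

```csharp
using UnityEngine;

public class PrefabSpawner : MonoBehaviour
{
    // Assign a prefab asset to this field in the Inspector
    public GameObject treePrefab;

    void Start()
    {
        // Create a copy of the prefab in the scene, 5 units ahead of the origin
        Instantiate(treePrefab, new Vector3(0f, 0f, 5f), Quaternion.identity);
    }
}
```

Every copy created this way keeps the components and property values stored in the prefab.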
GameObject.FindWithTag("Car");
• Layers are used throughout Unity as a way to create groups of objects that share particular
characteristics. Layers are primarily used to restrict operations such as raycasting or rendering so that
they are only applied to the groups of objects that are relevant. In the manager, the first eight layers
are defaults used by Unity and are not editable. However, layers from 8 to 31 can be given custom
names just by typing in the appropriate text box. Note that unlike tags, the number of layers cannot be
increased.
gameObject.layer = 29;
• Materials
In Unity 3D, a Material is a file that contains information about the lighting of an object with that material.
A material has nothing to do with collisions, mass, or even physics in general. It is simply used to define
how lighting affects an object with that material.
In Unity, Materials are not much more than a container for shaders and textures that can be applied to
models. Most of the customization of Materials depends on which shader is selected for it, although all
shaders have some common functionality.
• Shaders
A shader is a program that defines how every single pixel is drawn on the screen. Shaders are not
programmed in C# or even in an object-oriented programming language at all. In Unity, shaders are written
in a C-like language called HLSL (inside ShaderLab files). This language can give direct instructions to the GPU for fast processing.
Shader's scripts have mathematical calculations and algorithms for calculating the color of each pixel
rendered, based on the lighting input and the material configuration.
If the texture of a model specifies what is drawn on its surface, the shader is what determines how it is
drawn. In other words, we can say that a material contains properties and textures, and shaders dictate
what properties and textures a material can have.
• Textures
Textures are flat images that can be applied to 3D objects. Textures are responsible for models being
colorful and interesting instead of blank and boring.
• Audio Listener
• Audio Source
Let's see these components one by one:
• Audio Listener
Audio Listener is the component that is automatically attached to the main camera every time you create a
scene. It does not have any properties since its only job is to act as the point of perception.
This component listens to all audio playing in the scene and transfers it to the system's speaker. It acts as
the ears of the game. Only one AudioListener should be in a scene for it to function properly.
• Audio Source
The audio source is the primary component that you will attach to a GameObject to make it play sound.
This is the component that is responsible for playing the sound.
To add the Audio Source component, select one GameObject, and go to the Inspector tab. Click on Add
Component and search for Audio Source.
Select Audio Source. Audio source will playback an Audio Clip when triggered through the mixer, through
code or by default, when it awakes. An Audio Clip is a sound file that is loaded into an AudioSource. It can
be any standard audio file such as .wav, .mp3, and so on. An Audio Clip is a component within itself.
There are several different methods for playing audio in Unity, including:
AudioSource.Play to start a single clip from a script.
AudioSource.PlayOneShot to play overlapping, repeating and non-looping sounds.
AudioSource.PlayClipAtPoint to play a clip at a 3D position, without an Audio Source.
AudioSource.PlayDelayed or AudioSource.PlayScheduled to play a clip at a time in the future.
Or by selecting Play on Awake on an Audio Source to play a clip automatically when an object loads.
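The first two of these methods can be sketched in a short script (the field names and key bindings here are arbitrary; the AudioSource and AudioClip are assumed to be assigned in the Inspector):

```csharp
using UnityEngine;

public class AudioExample : MonoBehaviour
{
    public AudioSource source; // assigned in the Inspector
    public AudioClip shotClip; // assigned in the Inspector

    void Update()
    {
        // Start (or restart) the source's own clip
        if (Input.GetKeyDown(KeyCode.P))
            source.Play();

        // Fire a one-shot sound that can overlap with itself
        if (Input.GetKeyDown(KeyCode.O))
            source.PlayOneShot(shotClip);
    }
}
```

PlayOneShot is the usual choice for rapid, repeating effects like gunshots, because each call layers on top of the previous one instead of cutting it off.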
• Directional Light
Directional Light represents large, distant sources that come from a position outside the range of the
game world. This type of light is used to simulate the effect of the sun on a scene. Here’s an example of
how the previous scene changes when directional light is added to it.
The position of the Directional Light does not matter in Unity because the Directional Light exerts the same
influence on all game objects in the scene, regardless of how far away they are from the position of the
light. You can use the transform tool on the Directional Light to change the angles of the light, mimicking
sunset or sunrise.
• Point Light
Unlike Directional Light, a Point Light is located at a point in space and equally sends light out in all
directions. Point Light also has a specified range and only affects objects within this range, which is
indicated by a yellow circle.
The further away an object is from the Point Light, the less it is affected by the light. And if you take an
object out of the circle, it won’t be affected by the light at all.
Point Lights in Unity are designed based on the inverse square law, which states that “The intensity of the
radiation is inversely proportional to the square of the distance.” This means the intensity of the light to an
observer from a source is inversely proportional to the square of the distance from the observer to the
source. Point lights can be used to create a streetlight, a campfire, a lamp in an interrogation room, or
anywhere you want the light to affect only a certain area.
• Spot Light
Point Lights and Spot Lights are similar because, like a Point Light, a Spot Light has a specific location and
range over which the light falls off.
However, unlike the Point Light that sends light out in all directions equally, the Spot Light emits light in
one direction and is constrained to a specific angle, resulting in a cone-shaped region of light. Spot Lights
can be used to create lamps, streetlights, torches, etc. I like using them to create the headlight of a car, like
in the picture below.
• Area Light
Like Spot Lights, Area Lights are only projected in one direction. However, they’re emitted in a rectangular
shape. Unlike the three previously mentioned lights, Area Lights need to be baked before they can take
effect. We’ll talk more about baking in a bit, but here’s an example of what Area Lights look like when used
in a scene.
Unity3D C# Concepts
3.1 Script as a MonoBehavior Component
The MonoBehaviour class is the base class from which every Unity script derives, by default. When you
create a C# script from Unity’s project window, it automatically inherits from MonoBehaviour, and
provides you with a template script. See Creating and Using scripts for more information on this.
=====================================================================================
using UnityEngine;
using System.Collections;

public class NewBehaviourScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }
}
=====================================================================================
The MonoBehaviour class provides the framework which allows you to attach your script to a GameObject
in the editor, as well as providing hooks into useful Events such as Start and Update.
Whenever you are “using” a namespace, you are saying, I want to have access to the code these
namespaces provide.
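A minimal sketch of what the using directive saves you (the class name is arbitrary):

```csharp
// Without "using UnityEngine;" we would have to write the full name every time:
// UnityEngine.Debug.Log("Hello");
using UnityEngine;

public class NamespaceExample : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Hello"); // short form, thanks to the using directive
    }
}
```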
C# is used for:
• Mobile applications
• Desktop applications
• Web applications
• Games
• Database applications
4. Avoid the use of System data types and prefer using the predefined data types.
// Avoid // Correct
Int32 employeeId; int employeeId;
6. For better code indentation and readability always align the curly braces vertically.
7. Always declare the variables as close as possible to their use.
8. Constants should always be declared in UPPER_CASE.
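These conventions can be sketched in one small script (names are illustrative only):

```csharp
using UnityEngine;

public class StandardsExample : MonoBehaviour
{
    // Rule 8: constants declared in UPPER_CASE
    private const int MAX_LIVES = 3;

    void Start()
    {
        // Rule 7: variable declared as close as possible to its use
        int livesLeft = MAX_LIVES;

        // Rule 6: curly braces aligned vertically throughout this file
        if (livesLeft > 0)
        {
            Debug.Log("Lives left: " + livesLeft);
        }
    }
}
```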
3.4 Data Types & Variables
There are hundreds of types available in Unity and Bolt, but you don’t need to know each of them by heart.
However, you should be familiar with the most common types. Here’s a little summary table:
Integer A number without any decimal value, like 3 or 200.
Float A number with or without decimal values, like 0.5 or 13.25.
Boolean A value that can only be either true or false. Commonly used in logic or in toggles.
String A piece of text, like a name or a message.
Char A single character in a string, often alphabetic or numeric. Rarely used.
Vectors Vectors represent a set of float coordinates.
Vector2, with X and Y coordinates, for 2D;
Vector3, with X, Y and Z coordinates, for 3D;
Vector4, with X, Y, Z and W coordinates, rarely used.
GameObject Gameobjects are the base entity in Unity scenes. Each game object has a name, a transform
for its position and rotation, and a list of components.
Example:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class DataTypesExample : MonoBehaviour
{
    // A few of the common types from the table above
    public int score = 100;
    public float speed = 5.5f;
    public bool isAlive = true;
    public string playerName = "Player";
    public Vector3 spawnPoint = new Vector3(0f, 1f, 0f);
}
• Simple Method
• Parameterized Method
• Method Overloading
A method with the same name in the same class but with different parameters is called method overloading.
• Method Overriding
A method with the same name and the same parameters but in a different class is called method
overriding.
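Both ideas can be shown side by side in a short sketch (the class and method names are made up for illustration):

```csharp
using UnityEngine;

public class Weapon
{
    // Method overloading: same name, same class, different parameters
    public void Fire() { Debug.Log("Fire once"); }
    public void Fire(int shots) { Debug.Log("Fire " + shots + " shots"); }

    // Marked virtual so a child class is allowed to override it
    public virtual void Reload() { Debug.Log("Standard reload"); }
}

public class Shotgun : Weapon
{
    // Method overriding: same name and parameters, but in a different (child) class
    public override void Reload() { Debug.Log("Shotgun reload"); }
}
```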
using UnityEngine;
public class IfStatementExample : MonoBehaviour
{
    public int myNumber = 10;
    void Start()
    {
        if (myNumber == 10)
        {
            print("myNumber is equal to 10");
        }
        else if (myNumber == 15)
        {
            print("myNumber is equal to 15");
        }
        else
        {
            print("myNumber is not equal to either");
        }
    }
}
3.7 Switch Statement
Switch Statement just like the if Statement can be used to write a conditional block that will result into
different outputs based on the match expression or variable that is being switched.
using UnityEngine;
using System.Collections;
public class SwitchExample : MonoBehaviour
{
    public int intelligence = 5;
    void Greet()
    {
        switch (intelligence)
        {
            case 5:
                print("Why hello there good sir! Let me teach you about Trigonometry!");
                break;
            case 4:
                print("Hello and good day!");
                break;
            default:
                print("Incorrect intelligence level.");
                break;
        }
    }
}
3.8 Loops
These are the most common types of loop used in Unity3D C#:
• For Loops:
For Loop is probably the most common and flexible loop. Works by creating a loop with a controllable
number of iterations. Functionally it begins by checking conditions in the loop. After each loop, known as
an iteration, it can optionally increment a value.
The syntax for this has three arguments. The first one is iterator; this is used to count through the
iterations of the loop. The second argument is the condition that must be true for the loop to continue.
Finally, the third argument defines what happens to the iterator in each loop.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ForLoopExample : MonoBehaviour
{
    public int numEnemies = 3;
    void Start()
    {
        for (int i = 0; i < numEnemies; i++)
        {
            print("Creating enemy number: " + i);
        }
    }
}
The foreach loop is very simple and easy to use, and it has the simplest syntax. The foreach keyword is
followed by parentheses. Inside the parentheses, you first specify the type of data you want to iterate over,
then pick a name for the single-element variable; whatever name you choose is used to access the current
element inside the loop body. After the name, write the in keyword, followed by your List variable name.
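The steps above can be sketched as follows (the list contents and names are just examples):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ForeachExample : MonoBehaviour
{
    public List<string> enemyNames = new List<string> { "Zombie", "Ghoul", "Skeleton" };

    void Start()
    {
        // type, element name, the in keyword, then the collection
        foreach (string enemy in enemyNames)
        {
            Debug.Log("Enemy: " + enemy);
        }
    }
}
```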
string[] names = new string[3];
names[0] = "JavaTpoint";
names[1] = "Nikita";
names[2] = "Unity Tutorial";
Declaring an array
To declare an array in C#, you must first say what type of data will be stored in the array. After the type,
specify an open square bracket and then immediately a closed square bracket, []. This will make the
variable an actual array. We also need to specify the size of the array. It simply means how many places are
there in our variable to be accessed.
Syntax: accessModifier datatype[] arrayname = new datatype[arraySize];
Example: public string[] name = new string[4];
To allocate empty values to all places in the array, simply write the "new" keyword followed by the type,
an open square bracket, a number describing the size of the array, and then a closed square bracket.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ArrayExample : MonoBehaviour
{
    public int[] playerNumber = new int[5];
    void Start()
    {
        for (int i = 0; i < playerNumber.Length; i++)
        {
            playerNumber[i] = i;
            Debug.Log("Player Number: " + playerNumber[i].ToString());
        }
    }
}
3.11 Classes
Classes are the blueprints for your objects. Basically, in Unity, all of your scripts will begin with a class
declaration. Unity automatically puts this in the script for you when you create a new C# script. This class
shares the name as the script file that it is in. This is very important because if you change the name of one,
you need to change the name of the other. So, try to name your script sensibly when you create it.
The class is a container for variables and functions and provides, among other things, a nice way to
group things that work together.
They are an organizational tool, something known as object-oriented programming or OOP for short. One
of the principles of object-oriented programming is to split your scripts up into multiple scripts, each one
taking a single role or responsibility classes should, therefore, be dedicated ideally to one task.
The main aim of object-oriented programming is to allow the programmer to develop software in modules.
This is accomplished through objects. Objects contain data, like integers or lists, and functions, which are
usually called methods.
// Player.cs
public class Player
{
    public string name;
    public int score;
    public void gameData()
    {
        Debug.Log("Player name = " + name);
        Debug.Log("Player score = " + score);
    }
}
• Encapsulation
We all have studied encapsulation as the hiding data members and enabling the users to access data using
public methods that we call getters and setters. But why? Let’s forget that and reiterate with a simpler
definition.
Encapsulation is a technique of restricting a user from directly modifying the data members or variables of
a class in order to maintain the integrity of the data. How do we do that? We restrict the access of the
variables by switching the access-modifier to private and exposing public methods that we can use to
access the data
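A minimal sketch of this idea (the class and field names are invented for illustration):

```csharp
using UnityEngine;

public class Health
{
    // The data member is private, so outside code cannot corrupt it directly
    private int health = 100;

    // Public setter enforces the integrity of the data
    public void SetHealth(int value)
    {
        health = Mathf.Clamp(value, 0, 100);
    }

    // Public getter exposes the value in a read-only way
    public int GetHealth()
    {
        return health;
    }
}
```

Because every write goes through SetHealth, the value can never leave the 0-100 range, which is exactly the integrity guarantee encapsulation is about.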
• Inheritance
Inheritance is a technique of acquiring the properties of another class having features in common. It allows
us to increase the reusability and reduce the duplication of code. It is also known as a child-parent
relationship, where a child inherits the properties of its parent. This is the reason it is called ‘is-a
relationship’ where the child is-a type of parent.
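A tiny sketch of the is-a relationship (names are illustrative):

```csharp
using UnityEngine;

public class Vehicle
{
    public float speed = 10f;
    public void Move() { Debug.Log("Moving at speed " + speed); }
}

// Car IS-A Vehicle: it reuses speed and Move() without duplicating them
public class Car : Vehicle
{
    public void Honk() { Debug.Log("Beep!"); }
}
```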
• Abstraction
Abstraction is a technique of providing only the essential details to the user by hiding the unnecessary or
irrelevant details of an entity. This helps in reducing the operational complexity at the user-end.
Abstraction enables us to provide a simple interface to a user without asking for complex details to
perform an action. In simpler words, it gives the user the ability to drive the car without requiring them to
understand the tiny details of how the engine works.
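The car analogy can be sketched with an abstract class (all names here are hypothetical):

```csharp
using UnityEngine;

// The user only needs the simple Drive() interface...
public abstract class CarBase
{
    public void Drive()
    {
        StartEngine();
        Debug.Log("Driving");
    }

    // ...while the complex engine details stay hidden in subclasses
    protected abstract void StartEngine();
}

public class PetrolCar : CarBase
{
    protected override void StartEngine()
    {
        Debug.Log("Igniting petrol engine");
    }
}
```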
• Polymorphism
The last and the most important of all 4 pillars of OOP is Polymorphism. Polymorphism means “many
forms”. By its name, it is a feature that allows you to perform an action in multiple or different ways. When
we talk about polymorphism, there isn’t a lot to discuss unless we talk about its types.
There are two types of polymorphism:
➢ Method Overloading – Static Polymorphism (Static Binding)
The method overloading or static polymorphism, also known as Static Binding, also known as compile-
time binding is a type where method calls are defined at the time of compilation. Method overloading
allows us to have multiple methods with the same name having different datatypes of parameter, or a
different number of parameters, or both.
➢ Method Overriding – Dynamic Polymorphism (Dynamic Binding)
In contrast to method overloading, method overriding allows you to have multiple methods with exactly
the same signature, but in different classes. What makes this special? These classes have an IS-A
relationship, i.e. there should be inheritance between them. In other words, in method overriding or
dynamic polymorphism, methods are resolved dynamically at runtime when the method is called,
based on the type of the object the reference was initialized with.
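Runtime resolution can be sketched like this (class names are invented for the example):

```csharp
using UnityEngine;

public class Enemy
{
    public virtual void Attack() { Debug.Log("Enemy attacks"); }
}

public class Boss : Enemy
{
    public override void Attack() { Debug.Log("Boss attacks with a special move"); }
}

public class PolymorphismDemo : MonoBehaviour
{
    void Start()
    {
        // The reference type is Enemy, but the object is a Boss,
        // so the call is resolved at runtime to Boss.Attack()
        Enemy enemy = new Boss();
        enemy.Attack();
    }
}
```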
Unity3D Scripting API
API: Application Programming Interface
• Awake
Awake is called either when an active GameObject that contains the script is initialized when a Scene loads,
or when a previously inactive GameObject is set to active, or after a GameObject created with
Object.Instantiate is initialized. Use Awake to initialize variables or states before the application starts.
Unity calls Awake only once during the lifetime of the script instance. A script's lifetime lasts until the
Scene that contains it is unloaded. If the Scene is loaded again, Unity loads the script instance again, so
Awake will be called again. If the Scene is loaded multiple times additively, Unity loads several script
instances, so Awake will be called several times (one on each instance).
using UnityEngine;
public class ExampleClass : MonoBehaviour
{
private GameObject target;
void Awake()
{
target = GameObject.FindWithTag("Player");
}
}
• Start
Start is called on the frame when a script is enabled just before any of the Update methods are called the
first time.
Like the Awake function, Start is called exactly once in the lifetime of the script. However, Awake is called
when the script object is initialised, regardless of whether or not the script is enabled. Start may not be
called on the same frame as Awake if the script is not enabled at initialisation time.
using UnityEngine;
public class ExampleClass : MonoBehaviour
{
private GameObject target;
void Awake()
{
target = GameObject.FindWithTag("Player");
}
void Start()
{
if (target != null)
{
Debug.Log("Player name: " + target.name);
}
}
}
• Comparison
using UnityEngine;
public class ComparisonExample : MonoBehaviour
{
    private float update;
    void Awake()
    {
        Debug.Log("Awake");
        update = 0.0f;
    }
    void Update()
    {
        update += Time.deltaTime;
        if (update > 1.0f)
        {
            update = 0.0f;
            Debug.Log("Update");
        }
    }
}
• FixedUpdate
Update runs once per frame. FixedUpdate can run once, zero, or several times per frame, depending on
how many physics frames per second are set in the time settings, and how fast/slow the framerate is.
It's for this reason that FixedUpdate should be used when applying forces, torques, or other physics-
related functions - because you know it will be executed exactly in sync with the physics engine itself.
MonoBehaviour.FixedUpdate runs at the frequency of the physics system; it is called every fixed frame-rate
frame. The physics system calculations are computed immediately after FixedUpdate. The default time
between calls is 0.02 seconds (50 calls per second).
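A typical sketch of physics work done in FixedUpdate (field names and the thrust value are arbitrary; a Rigidbody component is assumed to be attached):

```csharp
using UnityEngine;

public class RocketExample : MonoBehaviour
{
    public float thrust = 5f;
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    // Forces belong in FixedUpdate, in sync with the physics engine
    void FixedUpdate()
    {
        if (Input.GetKey(KeyCode.Space))
        {
            rb.AddForce(Vector3.up * thrust);
        }
    }
}
```

Note that input polling with GetKey still works here because the key state persists across frames; one-frame events like GetKeyDown are better read in Update.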
• Vector2
A Vector2 object describes the X and Y position of a game object in Unity. Since it’s a Vector2, and it only
describes X and Y values, it is used for 2D game development.
In a 2D game, if you move a game object a certain number of units on the X axis and a certain number of
units on the Y axis, you get the new position of that game object:
• Vector3
The same goes for a 3D game, except that you can move the game object on X, Y, and Z axis in the game
space. The values for the axis are located in the Position property of the Transform component.
4.4 Enabling & Disabling Components
How to enable and disable components via script:
using UnityEngine;
public class LightToggle : MonoBehaviour
{
    private Light myLight;
    void Start()
    {
        myLight = GetComponent<Light>();
    }
    void Update()
    {
        if (Input.GetKeyUp(KeyCode.Space))
        {
            myLight.enabled = !myLight.enabled;
        }
    }
}
• Declaration
GameObject.SetActive(bool value)
• Parameters
value Activate or deactivate the object, where true activates the GameObject and false deactivates the
GameObject.
• Description
Activates/Deactivates the GameObject, depending on the given true or false value.
A GameObject may be inactive because a parent is not active. In that case, calling SetActive will not
activate it, but only set the local state of the GameObject, which you can check using
GameObject.activeSelf. Unity can then use this state when all parents become active.
Deactivating a GameObject disables each component, including attached renderers, colliders, rigidbodies,
and scripts. For example, Unity will no longer call the Update() method of a script attached to a deactivated
GameObject.
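A common use of SetActive is toggling a UI object on and off (the field name and key binding are just examples; the menu object is assumed to be assigned in the Inspector):

```csharp
using UnityEngine;

public class ToggleObject : MonoBehaviour
{
    public GameObject menu; // assigned in the Inspector

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Escape))
        {
            // Flip between active and inactive each time Escape is pressed
            menu.SetActive(!menu.activeSelf);
        }
    }
}
```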
4.6 OnEnable() & OnDisable()
• OnEnable()
This function is called when the object becomes enabled and active. This happens when a MonoBehaviour
instance is created, such as when a level is loaded or a GameObject with the script component is
instantiated.
OnEnable is unique because it is called every time the game object is enabled, no matter how many times
this happens. Put code here that needs to be executed each time the object is activated.
• OnDisable()
This function is called when the behaviour becomes disabled.
This is also called when the object is destroyed and can be used for any cleanup code. When scripts are
reloaded after compilation has finished, OnDisable will be called, followed by an OnEnable after the script
has been loaded.
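A minimal sketch of the pair (the log messages are placeholders for real setup and cleanup code):

```csharp
using UnityEngine;

public class LifecycleLogger : MonoBehaviour
{
    void OnEnable()
    {
        // Runs every time the object becomes enabled and active,
        // e.g. a good place to subscribe to events
        Debug.Log("Object enabled");
    }

    void OnDisable()
    {
        // Runs when the object is disabled or destroyed,
        // e.g. a good place to unsubscribe and clean up
        Debug.Log("Object disabled");
    }
}
```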
• Translate
Transform.Translate(Vector3 translation) is a function for moving a gameobject in the direction and
distance of translation. Vector3 (x,y,z) is a type of variable used for 3D coordinates.
If relativeTo is left out or set to Space.Self the movement is applied relative to the transform's local axes.
(the x, y and z axes shown when selecting the object inside the Scene View.) If relativeTo is Space.World
the movement is applied relative to the world coordinate system.
• Rotate
Use Transform.Rotate to rotate GameObjects in a variety of ways. The rotation is often provided as an
Euler angle and not a Quaternion.
You can specify a rotation in world axes or local axes.
Example
using UnityEngine;
using System.Collections;
public class MoveAndTurn : MonoBehaviour
{
    public float moveSpeed = 10f;
    public float turnSpeed = 50f;
    void Update()
    {
        if (Input.GetKey(KeyCode.UpArrow))
            transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);
        if (Input.GetKey(KeyCode.DownArrow))
            transform.Translate(-Vector3.forward * moveSpeed * Time.deltaTime);
        if (Input.GetKey(KeyCode.LeftArrow))
            transform.Rotate(Vector3.up, -turnSpeed * Time.deltaTime);
        if (Input.GetKey(KeyCode.RightArrow))
            transform.Rotate(Vector3.up, turnSpeed * Time.deltaTime);
    }
}
• Time.deltaTime
The interval in seconds from the last frame to the current one.
Time.deltaTime returns the amount of time in seconds that elapsed since the last frame completed. This
value varies depending on the frames per second (FPS) rate at which your game or app is running.
• Time.timeScale
Time.timeScale controls the rate at which time elapses. You can read this value, or set it to control how
fast time passes, allowing you to create slow-motion effects.
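As a small illustration (the key binding and values are chosen arbitrarily), a slow-motion toggle might look like this:

```csharp
using UnityEngine;

public class SlowMotionToggle : MonoBehaviour
{
    void Update()
    {
        // Toggle between normal speed and quarter speed.
        if (Input.GetKeyDown(KeyCode.Tab))
        {
            Time.timeScale = Time.timeScale == 1f ? 0.25f : 1f;
        }
    }
}
```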
➢ Examples of Lerp functions include Color.Lerp and Vector3.Lerp. These work in exactly the same way
as Mathf.Lerp but the ‘from’ and ‘to’ values are of type Color and Vector3 respectively. The third
parameter in each case is still a float representing how much to interpolate. The result of these
functions is finding a colour that is some blend of two given colours and a vector that is some
percentage of the way between the two given vectors.
Let’s look at another example:
Vector3 from = new Vector3 (1f, 2f, 3f);
Vector3 to = new Vector3 (5f, 6f, 7f);
// Here result = (4, 5, 6)
Vector3 result = Vector3.Lerp (from, to, 0.75f);
• Instantiate()
The name should already give you a hint as to what it is for — creating instances of GameObjects to add to
the scene.
Let’s take a look at the syntax of how this is used, break it down and write some sample code:
Instantiate(Object obj, Vector3 position, Quaternion rotation);
In its most common form, the Method takes 3 arguments. The first is an Object. This Object is what we
wish to instantiate in our scene. More often than not, in Unity we will be making use of GameObject
references.
The second argument is a Vector3 that defines a position in space. This is where we wish to create this new
instance.
Finally we have the Object’s initial rotation in the form of a Quaternion. This will depend on what your
plans are for that object.
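Putting the three arguments together, a minimal sketch (the prefab field is an assumption, assigned in the Inspector) looks like this:

```csharp
using UnityEngine;

public class Spawner : MonoBehaviour
{
    // Assign a prefab in the Inspector.
    public GameObject prefab;

    void Start()
    {
        // Create a copy of the prefab at the origin with no rotation.
        Instantiate(prefab, Vector3.zero, Quaternion.identity);
    }
}
```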
• Destroy()
Now, let's pretend that cube represents an enemy and there's only one thing to do about enemies:
Destroy() them all!
Let's have a look at the Method's syntax:
Destroy(Object obj, float t = 0.0f);
This Method is quite simple to use. It takes a reference to the Object we want to destroy and an optional
float that will delay the operation by that amount of seconds. By default this is 0.0f, which will execute the
Method instantly.
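For example (the collision rule here is just illustrative), destroying an object two seconds after it is hit:

```csharp
using UnityEngine;

public class Enemy : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Remove this GameObject from the scene after a 2-second delay.
        Destroy(gameObject, 2.0f);
    }
}
```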
• Input.GetButton
Declaration public static bool GetButton(string buttonName);
Parameters buttonName The name of the button such as Jump.
Returns bool True while the button is held down and has not been released.
Description
Returns true while the virtual button identified by buttonName is held down.
Think auto fire - this will return true as long as the button is held down, so it suits continuous actions such
as firing a weapon; use Input.GetButtonDown instead for events that should trigger once per press. The
buttonName argument will normally be one of the names in the Input Manager such as Jump or Fire1.
GetButton returns false once the button is released.
Example
// Instantiates a projectile every 0.5 seconds,
// if the Fire1 button (default is Ctrl) is pressed.
using UnityEngine;

public class FireExample : MonoBehaviour
{
    public GameObject projectile;
    private float myTime = 0.0f;
    private float nextFire = 0.5f;

    void Update()
    {
        myTime = myTime + Time.deltaTime;
        if (Input.GetButton("Fire1") && myTime > nextFire)
        {
            nextFire = myTime + 0.5f;
            Instantiate(projectile, transform.position, transform.rotation);
        }
    }
}
• Input.GetAxis
Returns the value of the virtual axis identified by axisName.
The value will be in the range -1...1 for keyboard and joystick input devices. The meaning of this value
depends on the type of input control, for example with a joystick's horizontal axis a value of 1 means the
stick is pushed all the way to the right and a value of -1 means it's all the way to the left; a value of 0 means
the joystick is in its neutral position.
If the axis is mapped to the mouse, the value is different and will not be in the range of -1...1. Instead it'll
be the current mouse delta multiplied by the axis sensitivity. Typically a positive value means the mouse is
moving right/down and a negative value means the mouse is moving left/up.
This is frame-rate independent; you do not need to be concerned about varying frame-rates when using
this value.
To set up your input or view the options for axisName, go to Edit > Project Settings > Input Manager. This
brings up the Input Manager. Expand Axis to see the list of your current inputs. You can use one of these as
the axisName. To rename the input or change the positive button etc., expand one of the options, and
change the name in the Name field or Positive Button field. Also, change the Type to Joystick Axis. To add a
new input, add 1 to the number in the Size field.
Example
using UnityEngine;

public class AxisExample : MonoBehaviour
{
    public float speed = 10.0f;
    public float rotationSpeed = 100.0f;

    void Update()
    {
        // Get the horizontal and vertical axis.
        // By default they are mapped to the arrow keys.
        // The value is in the range -1 to 1.
        float translation = Input.GetAxis("Vertical") * speed;
        float rotation = Input.GetAxis("Horizontal") * rotationSpeed;

        // Make it move 10 meters per second instead of 10 meters per frame...
        translation *= Time.deltaTime;
        rotation *= Time.deltaTime;

        // Apply the movement and rotation to this transform.
        transform.Translate(0, 0, translation);
        transform.Rotate(0, rotation, 0);
    }
}

using UnityEngine;

public class MouseLookExample : MonoBehaviour
{
    public float horizontalSpeed = 2.0f;
    public float verticalSpeed = 2.0f;

    void Update()
    {
        // Get the mouse delta. This is not in the range -1...1
        float h = horizontalSpeed * Input.GetAxis("Mouse X");
        float v = verticalSpeed * Input.GetAxis("Mouse Y");
        transform.Rotate(v, h, 0);
    }
}
• Input.GetKey
Returns true while the user holds down the key identified by name.
GetKey will report the status of the named key. This might be used to confirm a key is used for auto fire.
For the list of key identifiers see Input Manager. When dealing with input it is recommended to use
Input.GetAxis and Input.GetButton instead since it allows end-users to configure the keys. Returns true
while the user holds down the key identified by the key KeyCode enum parameter.
Example
using UnityEngine;

public class KeyExample : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKey(KeyCode.DownArrow))
        {
            print("down arrow key is held down");
        }
    }
}
4.13 OnMouseDown()
MonoBehaviours have a number of methods which are called automatically under certain circumstances.
These are methods such as Update, which is called every frame, and Start which is called just before the
first Update. Another of these methods is OnMouseDown.
If a MonoBehaviour with an implementation of OnMouseDown is attached to a GameObject with a Collider,
then OnMouseDown will be called when the left mouse button is pressed while the pointer is over that
Collider. Here is an example:
using UnityEngine;
public class Clicker : MonoBehaviour
{
void OnMouseDown()
{
// Code here is called when the GameObject is clicked on.
}
}
This is probably the easiest way to detect mouse clicks. If you are just starting out using Unity and only
have a simple use case then this is recommended. However, if you need a little more flexibility in the way
you handle mouse clicks then using input might be more useful to you.
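As a sketch of the more flexible input-based approach (the camera and ray distance are arbitrary choices), you can combine Input.GetMouseButtonDown with a raycast from the camera:

```csharp
using UnityEngine;

public class ClickDetector : MonoBehaviour
{
    void Update()
    {
        // 0 is the left mouse button.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                Debug.Log("Clicked on " + hit.collider.name);
            }
        }
    }
}
```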
• Invoke
Declaration Invoke(string methodName, float time);
Invokes the method methodName in time seconds. If time is set to 0 and Invoke is called before the first
frame update, the method is invoked at the next Update cycle before MonoBehaviour.Update. In this case,
it's better to call the function directly.
Example
using UnityEngine;

public class InvokeExample : MonoBehaviour
{
    public Rigidbody projectile;

    void Start()
    {
        // Launch the projectile in 2 seconds.
        Invoke("LaunchProjectile", 2.0f);
    }

    void LaunchProjectile()
    {
        Rigidbody instance = Instantiate(projectile);
        instance.velocity = Random.insideUnitSphere * 5.0f;
    }
}
• InvokeRepeating
Invokes the method methodName in time seconds, then repeatedly every repeatRate seconds.
Example
using UnityEngine;

public class InvokeRepeatingExample : MonoBehaviour
{
    public Rigidbody projectile;

    // Starting in 2 seconds,
    // a projectile will be launched every 0.3 seconds.
    void Start()
    {
        InvokeRepeating("LaunchProjectile", 2.0f, 0.3f);
    }

    void LaunchProjectile()
    {
        Rigidbody instance = Instantiate(projectile);
        instance.velocity = Random.insideUnitSphere * 5;
    }
}
• Enumeration
Enums are a data type that we can use to make code easier to read. Here is an example of a difficulty-level
system written in C#.
Example
using UnityEngine;

public class LevelSelector : MonoBehaviour
{
    public enum Level { Easy, Normal, Hard, Expert }
    public Level currentLevel;

    void Start()
    {
        switch (currentLevel)
        {
            case Level.Easy:
                break;
            case Level.Normal:
                break;
            case Level.Hard:
                break;
            case Level.Expert:
                break;
        }
    }
}
• IEnumerator
A coroutine is a function that allows pausing its execution and resuming from the same point after a
condition is met. We can say a coroutine is a special type of function used in Unity to stop execution
until a certain condition is met, then continue from where it left off.
This is the main difference between C# functions and coroutine functions, other than the syntax. A typical
function can return any type, whereas coroutines must return an IEnumerator, and we must use yield
before return.
So basically, coroutines let us break work into multiple frames. You might have thought we could do
this using the Update function. You are right, but we do not have any control over the Update function.
Coroutine code can be executed on-demand or at a different frequency (e.g., every 5 seconds instead of
every frame).
IEnumerator MyCoroutine()
{
    Debug.Log("Hello world");
    // yield must be used before any return in a coroutine.
    yield return null;
}
Here, the yield is a special return statement. It is what tells Unity to pause the script and continue on the
next frame.
We can use yield in different ways:
yield return null - This will Resume execution after All Update functions have been called, on the next
frame.
yield return new WaitForSeconds(t) - Resumes execution after (Approximately) t seconds.
yield return new WaitForEndOfFrame() - Resumes execution after all cameras and GUI were rendered.
yield return new WaitForFixedUpdate() - Resumes execution after all FixedUpdates have been called on all
scripts.
yield return new WWW(url) - This will resume execution after the web resource at URL was downloaded
or failed.
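Tying this together, a minimal sketch of starting a coroutine that waits between steps:

```csharp
using System.Collections;
using UnityEngine;

public class CoroutineExample : MonoBehaviour
{
    void Start()
    {
        // Coroutines must be started with StartCoroutine.
        StartCoroutine(Countdown());
    }

    IEnumerator Countdown()
    {
        for (int i = 3; i > 0; i--)
        {
            Debug.Log(i);
            // Pause this coroutine for roughly one second.
            yield return new WaitForSeconds(1f);
        }
        Debug.Log("Go!");
    }
}
```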
4.16 PlayerPrefs
PlayerPrefs is a class that stores Player preferences between game sessions. It can store string, float and
integer values into the user’s platform registry.
Unity stores PlayerPrefs in a local registry, without encryption. Do not use PlayerPrefs data to store
sensitive data.
DeleteAll Removes all keys and values from the preferences. Use with caution.
DeleteKey Removes the given key from the PlayerPrefs. If the key does not exist, DeleteKey has no
impact.
GetFloat Returns the value corresponding to key in the preference file if it exists.
GetInt Returns the value corresponding to key in the preference file if it exists.
GetString Returns the value corresponding to key in the preference file if it exists.
HasKey Returns true if the given key exists in PlayerPrefs, otherwise returns false.
Save Saves all modified preferences.
SetFloat Sets the float value of the preference identified by the given key. You can use
PlayerPrefs.GetFloat to retrieve this value.
SetInt Sets a single integer value for the preference identified by the given key. You can use
PlayerPrefs.GetInt to retrieve this value.
SetString Sets a single string value for the preference identified by the given key. You can use
PlayerPrefs.GetString to retrieve this value.
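As a small sketch (the key name "highScore" is just an example), a typical save/load pattern looks like this:

```csharp
using UnityEngine;

public class ScoreStore : MonoBehaviour
{
    void SaveScore(int score)
    {
        PlayerPrefs.SetInt("highScore", score);
        // Write modified preferences to disk.
        PlayerPrefs.Save();
    }

    int LoadScore()
    {
        // Fall back to 0 if the key has never been written.
        return PlayerPrefs.HasKey("highScore") ? PlayerPrefs.GetInt("highScore") : 0;
    }
}
```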
• SetInt
Declaration SetInt(string key, int value);
using UnityEngine;
public class Example : MonoBehaviour
{
public void SetInt(string KeyName, int Value)
{
PlayerPrefs.SetInt(KeyName, Value);
}
public int GetInt(string KeyName)
{
return PlayerPrefs.GetInt(KeyName);
}
}
• SetString
Declaration SetString(string key, string value);
using UnityEngine;
public class Example : MonoBehaviour
{
public void SetString(string KeyName, string Value)
{
PlayerPrefs.SetString(KeyName, Value);
}
}
5.1 Colliders & Triggers
There are several collider types in Unity3D. Some of them are Box Collider, Capsule Collider, Sphere
Collider, MeshCollider, Terrain Collider, and Wheel Collider. There are also 2D versions of these colliders.
Mesh Collider creates a collider that exactly covers complex meshes. However, physics calculations for
mesh colliders are expensive and, if possible, you should avoid using them. Most of the time developers
add a few primitive colliders in order to cover the object. Generally, it is sufficient to cover most parts of the
object.
Primitive objects come with colliders attached. If you import a custom mesh into your scene, you need to
add a collider component for that object.
Collision callback methods:
• OnCollisionEnter(Collision collision)
This function will run once at the moment the collider collides with another collider. The Collision object
passed to the method includes information about the collision, including a reference to the other
GameObject that collided with this collider.
• OnCollisionStay(Collision collision)
This function will run continuously as long as the collision is still happening. A Collision object is also passed
to it with information about the collision happening.
• OnCollisionExit(Collision collision)
This function will run once as soon as the collision stops. A Collision object is also passed to it with
information about the collision that ended.
Example
using UnityEngine;

public class CollisionExample : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Runs once when this collider first touches another collider.
        Debug.Log("Collided with " + collision.gameObject.name);
    }
}
5.2 Rigid bodies
• Use Gravity
This property determines whether gravity affects your game object or not. If it's set to false, then the
Rigidbody behaves as if it's in outer space. In the Unity interface, you can use the arrow keys to navigate
your object in the given plane. If gravity is enabled, the object will fall off as soon as it crosses the plane's
boundary.
• Mass
This property of the Rigidbody is used for defining the mass of your object. By default, the value is read in
kilograms. Different Rigidbodies that have a large difference in masses can cause the physics simulation to
be highly unstable. The interaction between objects of different masses occurs in the same manner as it
would in the real world. For instance, during a collision, a higher mass object pushes a lower mass object
more. A common misconception among users is that heavy objects will fall faster than lighter
ones. This doesn't hold in the world around us; the speed of fall is dictated by the factors of gravity and
drag.
• Drag
Drag can be interpreted as the amount of air resistance that affects the object when moving from forces.
When the drag value is set to 0, the object is subject to no resistance and can move freely. On the other
hand, if the value of the drag is set to infinity, then the object's movement comes to an immediate halt.
Essentially, drag is used to slow down an object. The higher the value of drag, the slower the movement of
the object becomes.
• Add Force
This property is used to add a force to the Rigidbody. Force is always applied continuously along the
direction of the force vector. Further, specifying the Force Mode enables the user to change the type of
force to acceleration, velocity change, or impulse. Users must keep in mind the fact that they can apply
force to an active Rigidbody only. If the GameObject is inactive, AddForce has no effect. Additionally,
the Rigidbody must not be kinematic. Once a force is applied, the state of the Rigidbody is set to
awake by default. The value of drag influences the speed of movement of an object under force.
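A minimal sketch (the force value is chosen arbitrarily) of applying an impulse at startup:

```csharp
using UnityEngine;

public class JumpImpulse : MonoBehaviour
{
    public float thrust = 10f;

    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        // Impulse applies the whole force instantly, taking mass into account.
        rb.AddForce(Vector3.up * thrust, ForceMode.Impulse);
    }
}
```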
• Angular Drag
Angular drag can be interpreted as the amount of air resistance that affects the object when rotating from
torque. As was the case with Drag, setting the value of Angular Drag to 0 eliminates the air resistance here
as well. However, even by setting the value of Angular Drag to infinity, you can't stop the object from rotating.
The purpose of Angular Drag is only to slow down the rotation of the object. The higher the value of
Angular Drag, the slower the rotation of the object becomes.
• IS Kinematic
When the Is Kinematic property is enabled, the object will no longer be driven by the physics engine.
Forces, joints, or collisions will stop having an impact on the Rigidbody. In this state, it can only be
manipulated by its Transform. Hitting objects under Is Kinematic brings about no change to their state as
they can no longer exchange forces. This property comes in handy when you want to move platforms or
wish to animate a Rigidbody with a HingeJoint attached.
• Interpolate
Users are advised to use interpolate in situations where they have to synchronize their graphics with the
physics of their GameObject. As Unity graphics are computed in the Update function and the physics in
the FixedUpdate function, they occasionally happen to be out of sync. To fix this lag, Interpolate is used.
• Collision Detection
This property is used to keep fast-moving objects from passing through other objects without detecting
collisions. For best results, users are encouraged to set this value to
CollisionDetectionMode.ContinuousDynamic for fast-moving objects. As for the other objects that these
need to collide with, you can set them to CollisionDetectionMode.Continuous. These two options are
known for having a big impact on physics performance. However, if you don’t have any problems with fast-
moving objects colliding with one another, then you can leave it set to the default value of
CollisionDetectionMode.Discrete.
• Constraints
Constraints are used for imposing restrictions on the Rigidbody’s motion. It dictates which degrees of
freedom are allowed for the simulation of the Rigidbody. By default, it is set to RigidbodyConstraints.None.
There are two groups of Constraints: Freeze Position and Freeze Rotation. While Freeze Position restricts the
movement of the Rigidbody along the X, Y, and Z axes, Freeze Rotation restricts its rotation around the
same.
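Constraints can also be set from code; a small sketch of freezing rotation on two axes:

```csharp
using UnityEngine;

public class UprightBody : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        // Combine constraint flags with | to freeze rotation around X and Z,
        // so the body can still spin around Y.
        rb.constraints = RigidbodyConstraints.FreezeRotationX |
                         RigidbodyConstraints.FreezeRotationZ;
    }
}
```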
5.3 Joints
Joints. If you look around, you will see that joints play a huge role in the physical world. For instance, take a
look at furniture or doors. Joints attach one body to another and, depending on the type of joint, restrict
movement. Even the human body contains around 360 joints. So, of course, Unity added the concept and
tools to simulate joints in games.
In Unity, you can use joints to attach a Rigidbody to another. According to the requirement, you can
restrict or allow movement. Joints also have other options that can be enabled for specific effects. For
example, you can set a joint to break when the force applied to it exceeds a certain threshold. Some joints
also allow a driving force to occur between the connected objects to set them in motion automatically. For
example, if you want to simulate a pendulum, you can child your pendulum object to an empty GameObject,
move the child down so that the empty object is the "pivot", and then apply the rotating code to
the parent. There are 5 main types of Joint:
• Character Joint
You can also call this joint a ball and socket joint. It is so named due to the similarity with a human joint.
Character Joints are mainly used for Ragdoll effects. They are an extended ball-socket joint which allows
you to limit the joint on each axis.
• Configurable Joint
Configurable Joints can emulate skeletal joints. You can configure this joint to force and restrict
rigid body movement in any degree of freedom. This joint is more flexible and is the most configurable
joint. It incorporates all the functionality of the other joint types and provides greater control of character
movement.
• Fixed Joint
This joint restricts the movement of a rigid body to follow the movement of the rigid body it is attached to.
In Unity, you can relate the transforms of two bodies by parenting one to the other, but this joint can be
used to do the same. It is also useful when you want two connected bodies to break apart easily.
• Hinge Joint
This joint simulates pendulum-like movement. It attaches to a point and rotates around that point,
treating it as the axis of rotation.
• Spring Joint
A slinky can be simulated using this joint. Keeps rigid bodies apart from each other but lets
the distance between them stretch slightly.
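A minimal sketch (the connectedBody assignment is illustrative) of adding a hinge joint from code:

```csharp
using UnityEngine;

public class DoorSetup : MonoBehaviour
{
    public Rigidbody frame;

    void Start()
    {
        // Add a HingeJoint to this object and connect it to the frame's Rigidbody.
        HingeJoint hinge = gameObject.AddComponent<HingeJoint>();
        hinge.connectedBody = frame;
        // Rotate around the local Y axis.
        hinge.axis = Vector3.up;
        // Break the joint if more than 1000 N of force is applied.
        hinge.breakForce = 1000f;
    }
}
```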
5.4 Raycasting
“Raycasting is the process of shooting an invisible ray from a point, in a specified direction to detect
whether any colliders lay in the path of the ray.” This is useful with side scrolling shooters, FPS, third
person shooters, bullet hell, and even adventure games. Being able to trace where a bullet or laser is going
to travel from start to finish means that we know exactly how it should behave. We can physically watch it
and manipulate it in the game world.
There are following types of Raycasting:
• RayCast
The most popular method would be Raycast method. This method “shoots” the ray from the origin
position in a given direction. We can limit this ray’s length if we want to, but by default, the length is
infinite.
using UnityEngine;

public class RayExample : MonoBehaviour
{
    private void FixedUpdate()
    {
        Ray ray = new Ray(transform.position, transform.forward);
        RaycastHit hitResult;

        if (Physics.Raycast(ray, out hitResult))
        {
            Debug.Log($"Raycast hit: {hitResult.collider.name}");
        }
    }
}
• LineCast
The Linecast method is really similar to the Raycast method. The difference between those two is the
definition. In Raycast, we are defining the start position and direction. In Linecast, we define the start and
end position. This method checks if there is something in between those two points.
using UnityEngine;
public class LineExample : MonoBehaviour
{
private void FixedUpdate()
{
Vector3 startPosition = transform.position;
Vector3 endPosition = startPosition + Vector3.right * 10;
RaycastHit hitResult;
if (Physics.Linecast(startPosition, endPosition, out hitResult))
{
Debug.Log($"Linecast hit: {hitResult.collider.name}");
}
}
}
5.5 Physics Material
Static Friction (0–1): How much force is needed to get the object moving in the first place;
0 means anything gets it going, 1 means it requires a heavy amount
of push.
Bounciness (0–1): How bouncy the surface is when something collides with it (or it collides with
something); 0 is a surface made of mud, 1 is one made of rubber.
Friction / Bounce Combine (Average, Minimum, Multiply, Maximum): This tells Unity which physics
material takes priority when making the calculation. Defaults to "Average",
where it tries to work out a middle ground, but sometimes it is useful to use
Minimum (where the lowest value of the two objects colliding is used) or
Maximum (where the highest value is used). E.g. when a rubber ball hits a
pile of mud, you don't want it bouncing away, so use "Minimum".
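These settings are usually edited in the Inspector, but as a sketch, a material can also be built and applied in code:

```csharp
using UnityEngine;

public class BouncySetup : MonoBehaviour
{
    void Start()
    {
        // Create a very bouncy, low-friction material at runtime.
        PhysicMaterial bouncy = new PhysicMaterial("Bouncy");
        bouncy.bounciness = 0.9f;
        bouncy.dynamicFriction = 0.1f;
        bouncy.bounceCombine = PhysicMaterialCombine.Maximum;

        // Apply it to this object's collider.
        GetComponent<Collider>().material = bouncy;
    }
}
```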
6. Animations
6.1 Animation & Animator
• Animation
As a Unity developer, you should know the basics of Unity Animation. By basics, it means you should be
able to create basic animations, work with imported animations, learn to use Unity Animator and control
the animation parameters.
Simply put, any action related to a Gameobject is referred to as Animation and the controller used to
control the actions is called Animator.
Let’s try to understand this in detail. But before that you should know the following:
• A game object can have more than one animation in Unity. For example, walking, running, jumping etc.
• You need a controller to control which animation plays at what time.
Now with that in mind. If you want to play a walk animation while the player is moving slowly and play the
run animation when the player is moving fast you can use the Unity animator to make that switch.
Animation is also referred to as an Animation Clip in Unity. You can create an animation or import it from
other software like Blender, 3ds Max, or Maya. All animation files are saved as .anim files.
• Animator
An Animator Controller asset is created within Unity and allows you to arrange and maintain a set of
animations for a character or object. In most situations, it is normal to have multiple animations and switch
between them when certain game conditions occur. For example, you could switch from a walk animation
to a jump whenever the spacebar is pressed. However even if you just have a single animation clip you still
need to place it into an animator controller to use it on a Game Object.
The controller has references to the animation clips used within it, and manages the various animation
states and the transitions between them using a so-called State Machine, which could be thought of as a
kind of flow-chart, or a simple program written in a visual programming language within Unity.
In some situations, an Animator Controller is automatically created for you, such as in the situation where
you begin animating a new GameObject using the Animation Window.
In other cases, you would want to create a new Animator Controller asset yourself and begin to add states
to it by dragging in animation clips and creating transitions between them to form a state machine.
Properties
• Controller: An animation controller which lays out the animations we can play, their relations/transitions
between each other and the conditions for doing so.
• Avatar: The rig or skeleton that can morph our model.
• Apply Root Motion: If disabled, animations that move, like a run animation, will not move the object.
• Update Mode: How the animation frames play.
  Normal: Updates every frame.
  Animate Physics: Based on the physics time step (50 times per second).
  Unscaled Time: Like Normal, but not tied to Time.timeScale.
6.2 Animation State Machine
The importance of state machines for animation is that they can be designed and updated quite easily with
relatively little coding. Each state has a Motion associated with it that will play whenever the machine is in
that state. This enables an animator or designer to define the possible sequences of character actions and
animations without being concerned about how the code will work.
using UnityEngine;

public class DoorController : MonoBehaviour
{
    public Animator anim;
    private bool trigger;

    void Start()
    {
        anim.SetBool("isDoorOpen", false);
        trigger = false;
    }

    void Update()
    {
        trigger = anim.GetBool("isDoorOpen");
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (!trigger)
            {
                anim.SetBool("isDoorOpen", true);
            }
            else
            {
                anim.SetBool("isDoorOpen", false);
            }
        }
    }
}
• Transitions are used for transitioning smoothly from one Animation State to another over a given
amount of time. Transitions are specified as part of an Animation State Machine. A transition from one
motion to a completely different motion is usually fine if the transition is quick.
6.3 Blend Trees
• Blend Trees are used for allowing multiple animations to be blended smoothly by incorporating parts of
them all to varying degrees. The amount that each of the motions contributes to the final effect is
controlled using a blending parameter, which is just one of the numeric animation parameters
associated with the Animator Controller. In order for the blended motion to make sense, the motions
that are blended must be of similar nature and timing. Blend Trees are a special type of state in an
Animation State Machine.
Examples of similar motions could be various walk and run animations. In order for the blend to work well,
the movements in the clips must take place at the same points in normalized time. For example, walking
and running animations can be aligned so that the moments of contact of foot to the floor take place at the
same points in normalized time (e.g. the left foot hits at 0.0 and the right foot at 0.5). Since normalized
time is used, it doesn’t matter if the clips are of different length.
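From script, the blending parameter is driven through the Animator; a sketch (the parameter name "Speed" is an assumption about the controller's setup):

```csharp
using UnityEngine;

public class LocomotionBlend : MonoBehaviour
{
    public Animator animator;

    void Update()
    {
        // Drive the blend tree's parameter from the vertical input axis,
        // so 0 plays the walk clip and 1 the run clip (with blends between).
        float speed = Mathf.Abs(Input.GetAxis("Vertical"));
        animator.SetFloat("Speed", speed);
    }
}
```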
6.4 Timeline & Cinemachine
• An animation is basically a clip that represents a state of a game object, such as running, idle, walking,
etc. The animator takes the seat of a controller that dynamically switches the states of the game object
it is attached to, based on predetermined conditions/rules.
So, it can be inferred that the animation-animator combination is restricted to changing the states of a
game object.
But what if you wanted to impact several objects in a linear sequence at different moments? That
problem gets solved by the Timeline :)
• Definition
The Unity Timeline Editor is a built-in tool that allows you to create and edit cinematic content, gameplay
sequences, audio sequences, and complex particle effects. You can move clips around the Timeline, change
when they start, and decide how they should blend and behave with other clips on the track.
• A Timeline Asset is any media (tracks, clips, recorded animations) that can be used in your project. It
could be an outside file or an image. It could also be assets created in Unity, such as an Animator
Controller Asset or an Audio Mixer Asset.
• A playable director connects this asset to a game object resulting in a timeline instance.
• Timeline Overview
1. Timeline Asset: This is a track that is linked to a GameObject that exists in the hierarchy. It will store
keyframes associated with that GameObject in order to perform animations or determine whether the
GameObject is active.
2. Associated GameObject: This is the GameObject that the track is linked to.
3. Frame: This is the current frame in the timeline that has been set. When you want to change
animations, you will set the keyframes at the starting and ending frames.
4. Track Group: As scenes grow, so will the number of tracks. By grouping tracks, you can keep your tracks
organized.
5. Record Button: When this is active, you can change the current frame and set the position and/or
rotation of the GameObject in the scene, and it will record the changes.
6. Curves Icon: Clicking this will open the Curves view to give you the finer details on the associated
animation keyframes so that you can adjust them as needed.
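A timeline instance is driven from script through the PlayableDirector component; a minimal sketch (the trigger setup is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Playables;

public class CutscenePlayer : MonoBehaviour
{
    public PlayableDirector director;

    void OnTriggerEnter(Collider other)
    {
        // Play the Timeline Asset bound to this director
        // when something enters the trigger volume.
        director.Play();
    }
}
```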
7. Advanced Concepts of Unity3D
7.1 Setup Project for Android
• Publishing format
Unity can build Android applications in the following publishing formats:
• APK
• Android App Bundle (AAB)
• Building
To build your Unity application for Android:
• Select File > Build Settings.
• From the list of platforms in the Platform pane, select Android.
• Select Build to generate the APK or AAB, or Build And Run to also deploy it to a connected device.
7.2 Design Your Project Architecture
• Mini MVCS
Here is Mini MVCS (Model-View-Controller-Service) architecture for Unity. Mini MVCS (hereafter called
Mini), is specifically designed for the unique aspects of game development in the Unity platform (Scenes,
Prefabs, Serialization, GameObjects, MonoBehaviours, etc…)
• A light-weight custom MVCS Architecture — For high-level organization
• A custom CommandManager — For decoupled, global communication
• Project Organization — A prescriptive solution for structuring your work
• Code Template — A prescriptive solution for best practices
Areas of Concern
Here is an overview of general MVCS fundamentals as it applies to MiniMVCS.
MiniMVCS — This is the parent object which composes the following…
➢ Model: Stores data. Sends Events.
➢ View: Renders audio/video to, and captures input from, the user. Observes Commands. Sends Events.
➢ Controller: This is the 'glue' that brings everything together. It observes events from other actors and
calls methods on the actors. It observes Commands (from any other Controllers) and sends Commands.
➢ Service: Loads/sends data from any backend services (e.g. Multiplayer). Sends Events.
➢ Context: While not officially an 'actor', this has an important role and is referenced by each of the
concepts above. It is the communication bus providing a way to send messages (called Commands) and
to look up any Model(s) via the included ModelLocator class.
7.3 Debugging
Debugging is a frequently performed task not just for general software developers but also for game
developers. During a debugging process of a game, most issues can be identified by simulating a code
walkthrough.
However, reading a codebase line by line and identifying logical errors is cumbersome. Moreover, when it
comes to Unity application development, this becomes further complex because most of your code blocks
are attached to game objects, and those trigger various game actions. Therefore, you’ll have to find a
systematic way to debug and carry out fixes faster and easier. Fortunately, Unity has provided many
helpful debugging tools that make debugging a piece of cake! Apart from that, you’re free to utilize the
comprehensive debugging tools offered by Visual Studio.
7.4 Profiler
Every game creator knows that smooth performance is essential to creating immersive gaming experiences
– and to achieve that, you need to profile your game.
The idea behind the Profiler is to provide as much timing, memory, and statistical information as possible. You can learn, for example, how much time each frame spends on rendering, scripts, physics, and garbage collection, and how much memory your game allocates.
Let's talk about spikes. "Spike" is the usual term for a situation where your game's frame rate drops significantly for a split second. Why is it called a spike? Because in the Profiler window it looks just like a spike standing out of the ground.
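To find out what a spike actually spends its time on, you can wrap a suspect block in named Profiler samples so it appears under its own label in the CPU Usage timeline. A minimal sketch (component and sample name are hypothetical):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Wraps a suspect code block in a named Profiler sample so it shows
// up as "Enemy AI Update" in the Profiler's CPU Usage view.
public class ProfilerExample : MonoBehaviour
{
    void Update()
    {
        Profiler.BeginSample("Enemy AI Update");
        UpdateEnemies();
        Profiler.EndSample();
    }

    void UpdateEnemies()
    {
        // Expensive per-frame work would go here.
    }
}
```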
7.5 Navigation
Have you ever wondered how the various NPCs (Non-Playable Characters) in a game move around the game world, avoiding objects and sometimes even avoiding you, only to pop out from behind you and give you a jump scare? How is this done so realistically? How do these so-called bots decide which paths to take and which to avoid?
In Unity, this miracle (not so much once you know how it works) is achieved using NavMeshes.
NavMeshes or Navigational Meshes are a part of the Navigation and path finding system of Unity. The
navigation system allows you to create characters that can intelligently move around the game world,
using navigation meshes that are created automatically from your Scene geometry. Dynamic obstacles
allow you to alter the navigation of the characters at runtime, while off-mesh links let you build specific
actions like opening doors or jumping down from a ledge.
• Different Components of NavMesh
The Unity NavMesh system consists of the following pieces:
➢ NavMesh (short for Navigation Mesh) is a data structure that describes the walkable surfaces of the game world and allows you to find a path from one walkable location to another. The data structure is built, or baked, automatically from your level geometry.
➢ The NavMesh Agent component helps you create characters that avoid each other while moving towards their goals. Agents reason about the game world using the NavMesh, and they know how to avoid each other as well as moving obstacles.
➢ The Off-Mesh Link component allows you to incorporate navigation shortcuts that cannot be represented as a walkable surface. For example, jumping over a ditch or a fence, or opening a door before walking through it, can all be described as Off-Mesh Links.
➢ The NavMesh Obstacle component allows you to describe moving obstacles that agents should avoid while navigating the world. A barrel or a crate controlled by the physics system is a good example. While the obstacle is moving, the agents do their best to avoid it; once it becomes stationary, it carves a hole in the NavMesh so that the agents can change their paths to steer around it or, if the stationary obstacle blocks the pathway, find a different route.
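Once a NavMesh is baked, moving a character along it takes very little code. A minimal sketch (assumes the scene has a baked NavMesh, a NavMesh Agent on this GameObject, and colliders for the mouse ray to hit):

```csharp
using UnityEngine;
using UnityEngine.AI;

// Moves a NavMesh Agent toward whatever point the player clicks on.
public class ClickToMove : MonoBehaviour
{
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                // The agent computes and follows the path automatically.
                agent.SetDestination(hit.point);
            }
        }
    }
}
```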
7.7 Occlusion Culling
• Static Occluders have either a Mesh or Terrain Renderer, are opaque, and do not move at runtime. Examples include walls, buildings, and mountains. If you use LOD Groups with a GameObject designated as a Static Occluder, the base LOD (LOD0) is used in the calculation.
• Static Occludees have any type of Renderer and do not move at runtime. Examples include level decor, such as shrubs, or any GameObject that is likely to be occluded. A GameObject can be both a Static Occluder and a Static Occludee. Dynamic GameObjects can be occluded, but they cannot occlude other GameObjects.
7.8 Mobile Game Optimization Tips
• Project configuration
There are a few Project Settings that can impact your mobile performance, such as removing unused Quality levels and disabling input features your game does not use (for example, the accelerometer).
• Assets
The asset pipeline can dramatically impact your application’s performance. An experienced technical artist
can help your team define and enforce asset formats, specifications, and import settings for smooth
processes.
Don’t rely on default settings. Use the platform-specific override tab to optimize assets such as textures
and mesh geometry. Incorrect settings might yield larger build sizes, longer build times, and poor memory
usage. Consider using the Presets feature to help customize baseline settings that will enhance a specific
project.
➢ Compress textures
Use Adaptive Scalable Texture Compression (ASTC) for both iOS and Android. The vast majority of games in development target min-spec devices that support ASTC compression.
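These overrides can also be enforced in code instead of clicking through the Inspector. A minimal editor-side sketch using Unity's AssetPostprocessor (place it in an Editor/ folder; the ASTC 6x6 block size is an assumption, so pick one matching your quality budget):

```csharp
using UnityEditor;

// Forces an ASTC platform override on every texture imported for Android;
// the iOS override works the same way with "iPhone" as the platform name.
public class TexturePreprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        var importer = (TextureImporter)assetImporter;
        var settings = importer.GetPlatformTextureSettings("Android");
        settings.overridden = true;
        settings.format = TextureImporterFormat.ASTC_6x6; // quality/size trade-off
        importer.SetPlatformTextureSettings(settings);
    }
}
```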
➢ Adjust mesh import settings
Much like textures, meshes can consume excess memory if not imported carefully. To minimize
meshes’ memory consumption:
✓ Compress the mesh: Aggressive compression can reduce disk space (memory at runtime, however,
is unaffected). Note that mesh quantization can result in inaccuracy, so experiment with
compression levels to see what works for your models.
✓ Disable Read/Write: Enabling this option duplicates the mesh in memory, which keeps one copy of
the mesh in system memory and another in GPU memory. In most cases, you should disable it (in
Unity 2019.2 and earlier, this option is checked by default).
✓ Disable rigs and BlendShapes: If your mesh does not need skeletal or blendshape animation,
disable these options wherever possible.
✓ Disable normals and tangents: If you are absolutely certain the mesh’s material will not need
normals or tangents, uncheck these options for extra savings.
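The checklist above can likewise be applied automatically at import time. A hedged sketch (editor script in an Editor/ folder; it assumes the models are static props, so apply it selectively in a real project):

```csharp
using UnityEditor;

// Applies the mesh import checklist to every imported model:
// compression on, Read/Write off, rigs/BlendShapes off, normals/tangents off.
public class ModelPreprocessor : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        var importer = (ModelImporter)assetImporter;
        importer.meshCompression = ModelImporterMeshCompression.Medium; // shrinks disk size
        importer.isReadable = false;                 // avoid the duplicate CPU-side copy
        importer.animationType = ModelImporterAnimationType.None;       // no rig needed
        importer.importBlendShapes = false;
        importer.importNormals = ModelImporterNormals.None;   // only if truly unused
        importer.importTangents = ModelImporterTangents.None;
    }
}
```

Because normals affect lighting, disabling them is only safe for meshes whose materials genuinely never read them, exactly as the checklist warns.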