Unity3D Course Outline by Saqib Javid


Game Development

Tool: Unity3D
Language: C#

Instructor: Muhammad Saqib Javid


Contact: +92-308-8201814
Email: saqib.javid313@gmail.com
LinkedIn: muhammad-saqib-javid-691797118
Importance of Game Development
• Build a solid foundation for game design and game development that will help you build your own
games.
• Create playable game projects - good for your portfolio, or just for your own sense of achievement.
• Develop highly transferable coding and problem-solving skills.
• Become excellent at using the Unity game engine.
• Learn how object-oriented programming works in practice.
• Be part of an amazing and supportive community of people similar to you.

Benefits of Unity3D
• This game engine is actively developing and getting more and more features with each release.
• Huge range of supported platforms from the web and mobile devices to high-end PC and consoles.
• Unity has one of the biggest communities.
• Integration with the .NET world.
• Unity offers a lot of ready-to-use solutions and assets.
• It’s easy to learn.
• With Unity’s flexible engine and service ecosystem, you have complete control over creating, running
and improving your video game.

Statistics Information about Unity3D


• 34% of the top free mobile games on the Play Store and App Store were built with the Unity Game Engine.
• Mobile games created with Unity account for 71% of the top 1,000 titles.
• In 2020, 2.8 billion monthly active users engaged with content made or operated with the Unity Game Engine.
• Applications developed with Unity are downloaded 5 billion times per month.
• 94 out of the leading 100 development studios by worldwide revenue are Unity clients.

Game Design & Develop Course Outcome


• Students will be able to understand the concept of a Game Engine.
• Detailed knowledge of the Unity Game Engine interface.
• Demonstrate knowledge of the Integrated Development Environment (IDE) used for programming.
• Demonstrate knowledge of the various interface components of the IDE in Unity.
• Use core programming structures as a basis for learning the C# scripting language to manage,
manipulate, and animate the game objects in Unity.
• Understand and apply Object-Oriented Programming techniques in C Sharp.
• Demonstrate the creation and use of their own object classes.
• Debug simple games and activities that demonstrate programming skills learned.
• Design and write a simple game – from idea to player execution.
• Use the 2D & 3D art pipeline to create game-ready assets from start to finish, from tools such as 3ds Max and Maya to the game
engine.
Course Outline
1. Introduction
1.1 Unity 3D Introduction
1.2 Installation
1.3 Unity Editor Introduction, Essential Concept

2. Basic Concept of Game Development


2.1 Introduction of transform (Handling Rotation, Position and Scale)
2.2 Introducing Game Objects
2.3 Introduction of Prefabs
2.4 Tags, Layers
2.5 Particle System
2.6 Materials, Shaders & Textures
2.7 Canvas
2.8 Audio Source
2.9 Lighting Types

3. Unity3D C# Concepts
3.1 Script As MonoBehavior Component
3.2 What is namespace?
3.3 C# Coding Standards
3.4 Data types & Variables
3.5 Types of Methods
3.6 If Statements
3.7 Switch Statement
3.8 Loops
3.9 Arrays
3.10 Scope And Access Modifiers
3.11 Classes
3.12 Basic Pillars of OOP

4. Unity3D Scripting API


4.1 Awake() And Start()
4.2 Update() And FixedUpdate()
4.3 Vector2, Vector3
4.4 Enabling & Disabling Component
4.5 Activate & Deactivate GameObject
4.6 OnEnable() & OnDisable()
4.7 Order of Execution for Events Function
4.8 Translate & Rotate
4.9 Time.deltaTime & Time.timeScale
4.10 Linear Interpolation
4.11 Instantiate & Destroy
4.12 Get Button, Get Key & Get Axis.
4.13 OnMouseDown
4.14 Invoke, Invoke Repeating
4.15 Enumeration, IEnumerator
4.16 PlayerPrefs

5. Physics
5.1 Colliders & Triggers
5.2 Rigid bodies
5.3 Joints
5.4 Raycasting
5.5 Physics Material

6. Animations
6.1 Animation & Animator
6.2 Animation State Machine
6.3 Blend Trees
6.4 Timeline & Cinemachine
7. Advanced Concepts of Unity3D
7.1 Setup Project for Android
7.2 Design Your Project Architecture.
7.3 Debugging
7.4 Profiler
7.5 Navigation
7.6 Light Baking
7.7 Occlusion Culling
7.8 Mobile Game Optimization Tips

8. Projects
8.1 3D Car Parking Game
8.2 Zombie Shooting Game
Introduction
1.1 Unity 3D Introduction
Unity is a cross-platform game engine first released by Unity Technologies in 2005. The focus of Unity
lies in the development of both 2D and 3D games and interactive content. Unity now supports
over 20 different target platforms for deployment, the most popular being the PC, Android and
iOS systems.
Unity features a complete toolkit for designing and building games, including interfaces for graphics, audio,
and level-building tools, requiring minimal use of external programs to work on projects.
In this series, we will be −

• Learning how to use the various fundamentals of Unity


• Understanding how everything works in the engine
• Understanding the basic concepts of game design
• Creating and building actual sample games
• Learning how to deploy your projects to the market

Features of Unity3D

• Easy workflow, allowing developers to rapidly assemble scenes in an intuitive editor workspace
• Quality game creation, such as AAA visuals, high-definition audio, and full-throttle action, without any
glitches on screen
• Dedicated tools for both 2D and 3D game creation with shared conventions to make it easy for
developers
• A unique and flexible animation system to create natural animations in very little time
• Smooth frame rates with reliable performance on all the platforms developers publish to
• One-click deployment to all platforms, from desktops to browsers to mobiles to consoles, within minutes
• Reduced development time thanks to ready-made, reusable assets available on the huge Asset Store

In summary, compared to other game engines, Unity is developer-friendly, easy to use, free for independent developers, and feature-rich.

1.2 Installation & Setting Up


Step 1 – Downloading
Navigate to the download page for Unity Hub: https://unity3d.com/get-unity/download. Then click on
“Download Unity Hub”.
Step 2 - Installing Unity Hub
Once Unity Hub has downloaded, navigate to where it was downloaded to and run it. Click through the
Installers steps, agreeing to the Terms of Service.
When the installer is finished make sure you have the “Run Unity Hub” box ticked.

When Unity Hub starts up you may see a windows firewall pop up. Just click the “Allow Access” button.
Now that Unity Hub is open we can begin installing a version of Unity.

Step 3 - Installing Unity

Navigate to the Installs tab, located on the left hand side.

Click on the Add Button

Next up select the version of Unity, in this case it will be the latest version, which is what we use for our
courses. Make sure the version selected is Unity 2019.4.x

Unity handles the installation of Visual Studio for us, so make sure that the box is ticked for Visual Studio
Community 2019. This is what we will be using to create C# code for our games.
Step 4 - Activating Unity Licence

To use Unity, you need an activated licence.


Open the Unity Hub and sign into your Unity ID via the account icon in the top right of the window. If you
don’t have an existing Unity account, you can create one by visiting the Unity ID website. Once you have
logged in, click the cog icon in the top right hand corner and navigate to the License Management tab.

Click Activate New License and the option to choose the type of license to activate (Unity Personal, Unity
Plus or Pro) appears.
Click I Don’t use Unity in a professional capacity. Then click Done.

1.3 Unity Editor Introduction, Essential Concept


When Unity first opens, you’ll see a window that looks like this:

The interface is highly customizable and can provide you with as much or as little information as you need.
In the upper right hand corner, you’ll see five buttons. Select the last one on the right. This is the Layout
Dropdown. From the list of options, select the 2 by 3 option.

Your editor should now look like the image below:


1. Scene View
The Scene view is where you construct your game. It’s where you add all models, cameras, and other
pieces that make up your game. This is a 3D window where you can visually place all the assets you’re
using.

2. Game View
The Game view represents the player’s perspective of the game. This is where you can play your game and
see how all the various mechanics work with one another.

3. Hierarchy Window
The Hierarchy window contains a list of all the current GameObjects used in your game. But what is a
GameObject? That’s an easy one: A GameObject is an object in your game.

OK, there’s a bit more to it than that! :]


In essence, GameObjects are empty containers that you customize by adding components.
You can create complex GameObject behavior via scripts. GameObjects can also act like folders,
containing other GameObjects, which makes them quite useful for organizing your scene. Any
GameObjects actively used in your game in the current scene will appear in the Hierarchy window.

4. Project Window
The Project window contains all assets used by your game. You can organize your assets by folders.
When you wish to use them, you can simply drag those assets from the Project window to the
Hierarchy window.
Alternatively, you can drag them from the Project window to the Scene view. If you drag files from your
computer into the Project window, Unity will automatically import those as assets.

5. Inspector Window
The Inspector window lets you configure any GameObject. When you select a GameObject in the
Hierarchy, the Inspector will list all the GameObject’s components and their properties.
For instance, a light will have a color field along with an intensity field. You can also alter values on your
GameObjects while the game is being played.

6. Toolbar
You use the toolbar to manipulate the various GameObjects in the Scene view. You’ll use the following
tools as you develop your game, so get familiar with them by trying them in your empty project!

7. Play Buttons
• First is the Play button, which starts and stops your game.
• Second is the Pause button. This pauses the game and lets you make modifications to it. Just like in
play mode, those modifications will be lost once you stop the game. Editing GameObjects during
play and pause is a handy tuning and balancing tool that lets you experiment on your game
without the danger of permanently breaking it.
• Finally, the Step button lets you step through your game one frame at a time. This is handy
when you want to observe animations on a frame-by-frame basis, or when you want to check the
state of particular GameObjects during gameplay.
8. Miscellaneous Editor Setting
• The first is the Collab drop-down, found on the right hand side of the toolbar. This is one of Unity’s
latest services that helps big teams collaborate on a single project seamlessly.
• The next button is the Services button. The services button is where you can add additional Unity
services to the game. Clicking on the button will prompt you to create a Unity Project ID. Once you
add a Project ID, you will be able to add services to your project.
You can also add:
➢ Analytics
➢ In-Game Ads
➢ Multiplayer Support
➢ In-App Purchasing
➢ Performance Reporting

• Next up is the Account button. This lets you manage your Unity account. It allows you to
view your account data, sign in and out, and upgrade.
• The fourth button is the Layers button. You can use Layers for such things as preventing the rendering
of GameObjects, or excluding GameObjects from physics events like collisions.
• The final button, Layouts, lets you create and save the layout of views in your editor and switch
between them. Unity is wonderfully customizable. Each of the different views in a layout can be
resized, docked, moved, or even removed from the editor altogether.

9. Console

The console shows the output of your code. These outputs can be used to quickly test a line of code
without having to build extra functionality just for testing.
Three types of messages usually appear in the default console. These messages can be related to most of
the compiler standards:

• Errors
• Warnings
• Messages
Errors: errors are exceptions or issues that will prevent the code from running at all.
Warnings: warnings are also issues, but this will not stop your code from running but may pose issues
during runtime.
Messages: messages are outputs that convey something to the user, but they do not usually cause an
issue.
We can also have the console output our own messages, errors, and warnings. To do that, we use the
Debug class. The Debug class is part of the UnityEngine namespace; it gives us methods to write messages to the
console, quite similar to how you would create normal output messages in your starter programs.
These methods are:
➢ Debug.Log
➢ Debug.LogWarning
➢ Debug.LogError
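As a quick sketch of these methods in use (the class name here is made up), a script attached to any GameObject could write all three message types to the Console:

```csharp
using UnityEngine;

// Hypothetical example: writing each message type to the Console.
public class ConsoleDemo : MonoBehaviour
{
    void Start()
    {
        Debug.Log("Game started");                 // plain message
        Debug.LogWarning("Health is getting low"); // warning (yellow icon)
        Debug.LogError("Save file not found");     // error (red icon)
    }
}
```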
Basic Concepts of Game
Development
2.1 Introduction of Transform (Position, Rotation, Scale)
The Transform component determines the Position, Rotation, and Scale of each object in the scene. Every
GameObject has a Transform.

Properties

Property: Function:

Position Position of the Transform in X, Y, and Z coordinates.

Rotation Rotation of the Transform around the X, Y, and Z axes, measured in degrees.

Scale Scale of the Transform along X, Y, and Z axes. Value “1” is the original size (size at which
the object was imported).

using UnityEngine;

public class Example : MonoBehaviour
{
    // Moves this GameObject 10 units upwards when the scene starts
    void Start()
    {
        this.transform.position += Vector3.up * 10.0f;
    }
}

2.2 Introduction of GameObject


The GameObject is the most important thing in the Unity Editor. Every object in your game is a
GameObject. This means that everything you can think of in your game has to be a GameObject.
However, a GameObject can't do anything on its own; you have to give it properties before it can
become a character, an environment, or a special effect.

A GameObject is a container; we have to add pieces to the GameObject container to make it into a
character, a tree, a light, a sound, or whatever else you would like it to be. Each piece is called a
component.

Depending on what kind of object you wish to create, you add different combinations of
components to a GameObject. You can compare a GameObject with an empty pan and components
with different ingredients that make up your recipe of gameplay. Unity has many different in-built
component types, and you can also make your own components using the Unity Scripting API.

Three important points to remember:

• GameObjects can contain other GameObjects. This behavior allows the organizing and
parenting of GameObjects that are related to each other. More importantly, changes to parent
GameObjects may affect their children; more on this in just a moment.
• Models are converted into GameObjects. Unity creates GameObjects for the various pieces of
your model that you can alter like any other GameObject.
• Everything contained in the Hierarchy is a GameObject. Even things such as lights and
cameras are GameObjects. If it is in the Hierarchy, it is a GameObject that's subject to your
command.
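To make the "empty container" idea concrete, here is a small illustrative sketch (all names are made up) that builds a GameObject out of components at run time:

```csharp
using UnityEngine;

// Illustrative only: an empty GameObject becomes a light by adding components.
public class ComponentDemo : MonoBehaviour
{
    void Start()
    {
        GameObject lamp = new GameObject("Lamp");     // an empty container
        Light lampLight = lamp.AddComponent<Light>(); // now it can emit light
        lampLight.intensity = 2f;
        lamp.AddComponent<AudioSource>();             // and it can also play sound
    }
}
```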

2.3 Introduction of Prefabs

• What is Prefab?
A prefab is a saved copy of a game object. Using prefabs, we can store game objects with their properties
and components already set, so we can reuse them in many ways. A prefab contains a hierarchy of game objects; in
other words, a prefab is a container which can be empty or contain any number of game
objects.

• Why do we need Prefab?


In Unity you can create anything using a prefab, as complex as you like, and save it for later use. One of
the advantages is that it allows you to edit the original and update all other copies of it directly, so we don't
have to go and change each individual object.

• What is the difference between Game object and Prefab?


When we duplicate a game object, the copies all work independently. So if we want to modify anything, we
have to make the change in every copy of the object.
Prefabs solve this problem. With a prefab, the above process happens automatically: we just modify the
prefab and all its instances update automatically, because they maintain a connection with the
original prefab and reflect all the changes that are made.
• How to create Prefab?
You can create a prefab using the following techniques:
In the Project window, right-click and create a Prefab, then drag the object from the scene onto the empty prefab.
Drag any game object onto the Project window to make a prefab. When we drag a prefab from the Project window into the
Scene view, Unity creates an instance of it, which appears with blue text in the Hierarchy.

• Where can we use Prefab?


To illustrate the strengths of prefabs, let's consider basic situations where they come in handy:
➢ Bullet shooting
➢ wall building
➢ Rocket launcher
Prefabs can be used whenever we have to instantiate objects at run time. Take one scenario: whenever
we press a button, a sphere or ball should appear in the scene.
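A minimal sketch of that scenario might look like this; the ballPrefab field is an assumption that you would assign in the Inspector:

```csharp
using UnityEngine;

// Sketch: spawn a prefab instance each time the Space key is pressed.
public class BallSpawner : MonoBehaviour
{
    public GameObject ballPrefab; // assumption: assign a prefab in the Inspector

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // Each call creates an independent instance of the prefab
            Instantiate(ballPrefab, transform.position, Quaternion.identity);
        }
    }
}
```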

2.4 Tags & Layers


• Tags are marker values that can be used to identify objects in your project. New tags can be added by
typing in the empty element at the bottom of the list of tags or by increasing the Size value. Decreasing
the size will remove tags from the end of the list.

GameObject.FindWithTag("Car");

• Layers are used throughout Unity as a way to create groups of objects that share particular
characteristics. Layers are primarily used to restrict operations such as raycasting or rendering so that
they are only applied to the groups of objects that are relevant. In the manager, the first eight layers
are defaults used by Unity and are not editable. However, layers from 8 to 31 can be given custom
names just by typing in the appropriate text box. Note that unlike tags, the number of layers cannot be
increased.

gameObject.layer = 29;
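Because layers are stored as bit positions, operations like raycasting take a layer mask. A small hedged example, assuming the objects of interest live on layer 29:

```csharp
using UnityEngine;

// Sketch: a raycast restricted to objects on layer 29 via a bitmask.
public class LayerMaskDemo : MonoBehaviour
{
    void Update()
    {
        int mask = 1 << 29; // bitmask with only layer 29 set

        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit, 100f, mask))
        {
            Debug.Log("Hit object on layer 29: " + hit.collider.name);
        }
    }
}
```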

2.5 Particle System


Particle Systems help in generating a large number of particles with small lifespans in an efficient manner.
These systems undergo a separate rendering process; they can instantiate particles even when there are
hundreds or thousands of objects.
Now, “particle” is a loose term in the Particle System: a particle is any individual texture, material
instance or entity that is generated by the particle system. Particles are not necessarily dots floating around in
space (although they can be!), and they can be used for a ton of different scenarios. A Particle System is managed
by a GameObject with the Particle System component attached; particle systems do not require any Assets
to set up, although they may require different materials depending on the effect you want.
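As an illustrative sketch, a script on the same GameObject as the Particle System component could toggle the effect at run time (the key binding is arbitrary):

```csharp
using UnityEngine;

// Sketch: toggle the attached Particle System with the P key.
[RequireComponent(typeof(ParticleSystem))]
public class ParticleToggle : MonoBehaviour
{
    ParticleSystem ps;

    void Start()
    {
        ps = GetComponent<ParticleSystem>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.P))
        {
            if (ps.isPlaying) ps.Stop();
            else ps.Play();
        }
    }
}
```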
2.6 Materials, Shaders & Textures
Every beautiful-looking game contains a variety of surfaces: metal, plastics, holograms, alien
artifacts, and so on. Rendering such surfaces convincingly is the job of physically based rendering.
Rendering in Unity uses Shaders, Materials, and Textures, and the three have a close relationship.

• Materials
In Unity 3D, a Material is a file that contains information about the lighting of an object with that material.
A material has nothing to do with collisions, mass, or even physics in general. It is simply used to define
how lighting affects an object with that material.
In Unity, Materials are not much more than a container for shaders and textures that can be applied to
models. Most of the customization of Materials depends on which shader is selected for it, although all
shaders have some common functionality.

• Shaders
A shader is a program that defines how every single pixel is drawn on the screen. Shaders are not
programmed in C#, or even in an object-oriented programming language at all. In Unity, shaders are written
in a C-like language called HLSL, wrapped in Unity's ShaderLab syntax. This language can give direct instructions to the GPU for fast processing.
Shader's scripts have mathematical calculations and algorithms for calculating the color of each pixel
rendered, based on the lighting input and the material configuration.
If the texture of a model specifies what is drawn on its surface, the shader is what determines how it is
drawn. In other words, we can say that a material contains properties and textures, and shaders dictate
what properties and textures a material can have.

• Textures
Textures are flat images that can be applied to 3D objects. Textures are responsible for models being
colorful and interesting instead of blank and boring.

• Relationship between them


A material specifies one specific shader to use, and the shader used determines which options are available
in the material. A shader specifies one or more texture variables that it expects to use, and the Material
Inspector in Unity allows you to assign your own texture assets to these texture variables.
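A small sketch of this relationship in code, assuming the object's material uses the Standard shader (whose color and main-texture properties are exposed as "_Color" and "_MainTex"):

```csharp
using UnityEngine;

// Sketch: changing a material's shader properties from a script.
public class MaterialDemo : MonoBehaviour
{
    public Texture2D newTexture; // assumption: assigned in the Inspector

    void Start()
    {
        // .material returns this renderer's own instance copy of the material
        Material mat = GetComponent<Renderer>().material;

        mat.color = Color.red;                  // shorthand for the "_Color" property
        mat.SetTexture("_MainTex", newTexture); // swap the albedo texture
    }
}
```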
2.7 Canvas
A user interface is a crucial part of any video game. While you may think of video games as interactive
stories full of gameplay and adventure, technically they’re just like any other software package. All games
require input from the user; even the most basic games ask the user to at least navigate a main menu to
start the game.
Just like any artist’s workflow, painting a user interface starts with a canvas. In Unity, the Canvas
component controls where and how the UI is drawn to the screen.
The Canvas is the area that all UI elements should be inside. The Canvas is a Game Object with a Canvas
component on it, and all UI elements must be children of such a Canvas. Creating a new UI element, such
as an Image using the menu GameObject > UI > Image, automatically creates a Canvas, if there isn't already
a Canvas in the scene. The UI element is created as a child to this Canvas. The Canvas area is shown as a
rectangle in the Scene View. This makes it easy to position UI elements without needing to have the Game
View visible at all times.

• Draw order of elements


UI elements in the Canvas are drawn in the same order they appear in the Hierarchy. The first child is
drawn first, the second child next, and so on. If two UI elements overlap, the later one will appear on top of
the earlier one.
To change which elements appear on top of other elements, simply reorder the elements in the Hierarchy
by dragging them.
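Since draw order follows sibling order in the Hierarchy, you can also change it from code. A minimal sketch (the method name is made up; you might hook it to a button click):

```csharp
using UnityEngine;

// Sketch: move this UI element to the end of its siblings so it draws on top.
public class BringToFront : MonoBehaviour
{
    public void OnClicked()
    {
        transform.SetAsLastSibling(); // drawn last, therefore on top
    }
}
```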

2.8 Audio Source


Creating the visual elements of a game is only half the job; adding sounds to your game is just as
important as developing amazing shaders. Unity's sound system is flexible and powerful.
Unity can import most standard audio file formats and has features for playing sounds in 3D space,
optionally with effects like echo and filtering applied. Unity can even record audio from any available
microphone on the user's machine for use during gameplay, or for storage and transmission.
There are two components related to Audio in Unity; they are:

• Audio Listener
• Audio Source
Let's see these components one by one:

• Audio Listener
Audio Listener is the component that is automatically attached to the main camera every time you create a
scene. It does not have any properties since its only job is to act as the point of perception.
This component listens to all audio playing in the scene and transfers it to the system's speaker. It acts as
the ears of the game. Only one AudioListener should be in a scene for it to function properly.
• Audio Source
The audio source is the primary component that you will attach to a GameObject to make it play sound.
This is the component that is responsible for playing the sound.
To add the Audio Source component, select one GameObject, and go to the Inspector tab. Click on Add
Component and search for Audio Source.
Select Audio Source. An Audio Source will play back an Audio Clip when triggered through the mixer, through
code or, by default, when it awakes. An Audio Clip is a sound file that is loaded into an AudioSource. It can
be any standard audio file such as .wav, .mp3, and so on. An Audio Clip is an asset in its own right.
There are several different methods for playing audio in Unity, including:
AudioSource.Play to start a single clip from a script.
AudioSource.PlayOneShot to play overlapping, repeating and non-looping sounds.
AudioSource.PlayClipAtPoint to play a clip at a 3D position, without an Audio Source.
AudioSource.PlayDelayed or AudioSource.PlayScheduled to play a clip at a time in the future.
Or by selecting Play on Awake on an Audio Source to play a clip automatically when an object loads.
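Putting a couple of these together, here is a hedged sketch; the jumpClip field is an assumption you would fill in the Inspector:

```csharp
using UnityEngine;

// Sketch: play a one-shot clip through the attached Audio Source on key press.
[RequireComponent(typeof(AudioSource))]
public class JumpSound : MonoBehaviour
{
    public AudioClip jumpClip; // assumption: assign an Audio Clip in the Inspector
    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            source.PlayOneShot(jumpClip); // overlapping plays won't cut each other off
        }
    }
}
```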

2.9 Lighting Types


Lighting in Unity can be daunting, because there are so many ways to light your game, depending on the
result you’re aiming for. From lightmaps to Point Lights and Spot Lights, it’s tough to decide which one to
use in your game.
While Unity makes it easy to place dynamic lights in your game scene and make your game look great, it
can get a bit complicated when you’re choosing between Unity’s many light types, each of which has
unique properties and effects.

• Direct lighting and indirect lighting


To understand lighting in Unity, you must first understand the concept of direct and indirect light.
➢ Direct light is light that has zero or one bounce. Direct lighting is mostly observed when looking
directly at the light source or when staring at an object that is directly receiving the light source.
➢ Indirect lighting is light that has two or more bounces before hitting the observer’s eyes. Here’s a
pictorial description of both light types.
In Unity, indirect lighting is handled by the Global Illumination (GI) system. GI is a system that
models how the light of one surface affects another surface, and it’s an important part of Unity lighting
that helps give scenes a realistic look.
You can set up and control the number of bounces in your scene by going to Window > Rendering >
Lighting > Lightmap Settings.
Now that we understand what direct and indirect light is in Unity, let’s look at the different light types in
the game engine.
Four types of light in Unity are:
➢ Directional Light
➢ Point Light
➢ Spot Light
➢ Area Light

• Directional Light
Directional Light represents large, distant sources that come from a position outside the range of the
game world. This type of light is used to simulate the effect of the sun on a scene. Here’s an example of
how the previous scene changes when directional light is added to it.

The position of the Directional Light does not matter in Unity because the Directional Light exerts the same
influence on all game objects in the scene, regardless of how far away they are from the position of the
light. You can use the transform tool on the Directional Light to change the angles of the light, mimicking
sunset or sunrise.

• Point Light
Unlike a Directional Light, a Point Light is located at a point in space and sends light out equally in all
directions. A Point Light also has a specified range and only affects objects within this range, which is
indicated by a yellow sphere gizmo in the Scene view.
The further away an object is from the Point Light, the less it is affected by the light. And if you take an
object out of the circle, it won’t be affected by the light at all.
Point Lights in Unity are designed based on the inverse square law, which states that “The intensity of the
radiation is inversely proportional to the square of the distance.” This means the intensity of the light to an
observer from a source is inversely proportional to the square of the distance from the observer to the
source. Point lights can be used to create a streetlight, a campfire, a lamp in an interrogation room, or
anywhere you want the light to affect only a certain area.

• Spot Light
Point Lights and Spot Lights are similar because, like a Point Light, a Spot Light has a specific location and
range over which the light falls off.
However, unlike the Point Light that sends light out in all directions equally, the Spot Light emits light in
one direction and is constrained to a specific angle, resulting in a cone-shaped region of light. Spot Lights
can be used to create lamps, streetlights, torches, etc. I like using them to create the headlight of a car, like
in the picture below.

• Area Light
Like Spot Lights, Area Lights are only projected in one direction. However, they’re emitted in a rectangular
shape. Unlike the three previously mentioned lights, Area Lights need to be baked before they can take
effect. We’ll talk more about baking in a bit, but here’s an example of what Area Lights look like when used
in a scene.
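The light types above can also be created from script. A hedged sketch of a campfire-style Point Light (all values are illustrative):

```csharp
using UnityEngine;

// Sketch: build a warm Point Light at run time using the properties discussed above.
public class CampfireLight : MonoBehaviour
{
    void Start()
    {
        GameObject go = new GameObject("Campfire Light");
        Light fire = go.AddComponent<Light>();

        fire.type = LightType.Point;            // emits equally in all directions
        fire.range = 6f;                        // only objects within 6 units are lit
        fire.intensity = 1.5f;
        fire.color = new Color(1f, 0.6f, 0.2f); // warm orange
    }
}
```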
Unity3D C# Concepts
3.1 Script as a MonoBehavior Component
The MonoBehaviour class is the base class from which every Unity script derives, by default. When you
create a C# script from Unity’s project window, it automatically inherits from MonoBehaviour, and
provides you with a template script. See Creating and Using scripts for more information on this.
=====================================================================================
using UnityEngine;
using System.Collections;

public class BSCS : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}
=====================================================================================
The MonoBehaviour class provides the framework which allows you to attach your script to a GameObject
in the editor, as well as providing hooks into useful Events such as Start and Update.

3.2 What is namespace?


A namespace is, simply put, a collection of classes: a library of code.
Whenever you create a new C# script in Unity, by default the script will have 3 different namespaces right
at the top of the script.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

Whenever you are “using” a namespace, you are saying: I want to have access to the code this
namespace provides.
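You can also group your own classes in a namespace. A short sketch (the namespace and class names are made up):

```csharp
using UnityEngine;                // provides MonoBehaviour, Debug, Vector3, ...
using System.Collections.Generic; // provides List<T>, Dictionary<TKey, TValue>, ...

namespace MyGame // a made-up namespace grouping this game's code
{
    public class ScoreKeeper : MonoBehaviour
    {
        List<int> scores = new List<int>(); // from System.Collections.Generic

        void Start()
        {
            scores.Add(10);
            Debug.Log("Scores stored: " + scores.Count); // Debug is from UnityEngine
        }
    }
}
```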

3.3 C# Coding Standards


C# is pronounced "C-Sharp".
It is an object-oriented programming language created by Microsoft that runs on the .NET Framework. C#
has roots from the C family, and the language is close to other popular languages like C++ and Java. The
first version was released in year 2002. The latest version, C# 11, was released in November 2022.

C# is used for:
• Mobile applications
• Desktop applications
• Web applications
• Games
• Database applications

And much, much more!


Below are some of the best practices which all C# developers should follow:
1. Class and Method names should always be in Pascal Case.
Pascal Case -- is a programming naming convention where the first letter of each compound word in a
variable is capitalized. Example (ItemNumber).

2. Method parameters and Local variables should always be in Camel Case.


Camel Case is a naming convention in which the first letter of each word in a compound word is
capitalized, except for the first word. Example (itemNumber).

3. Avoid the use of underscore while naming identifiers.


// Avoid //Correct
public String first_Name; public String firstName;

4. Avoid the use of System data types and prefer using the Predefined data types.
// Avoid // Correct
Int32 employeeId; int employeeId;

5. Always prefix an interface with letter I


// Avoid // Correct
public interface Employee public interface IEmployee

6. For better code indentation and readability always align the curly braces vertically.
7. Always declare the variables as close as possible to their use.
8. Constants should always be declared in UPPER_CASE.
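A short sketch applying several of these rules at once; all the names are invented for illustration:

```csharp
// Rule 5: interface prefixed with I; rule 1: Pascal Case names.
public interface IDamageable
{
    void TakeDamage(int damageAmount); // rule 2: camelCase parameter
}

public class EnemyController : IDamageable
{
    private const int MAX_HEALTH = 100;     // rule 8: constant in UPPER_CASE
    private int currentHealth = MAX_HEALTH; // rule 3: no underscores in identifiers

    public void TakeDamage(int damageAmount)
    {
        // Rule 7: declare variables as close as possible to their use
        int newHealth = currentHealth - damageAmount;
        currentHealth = newHealth < 0 ? 0 : newHealth;
    }
}
```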
3.4 Data Types & Variables
There are hundreds of types available in Unity and Bolt, but you don’t need to know each of them by heart.
However, you should be familiar with the most common types. Here’s a little summary table:
Integer A number without any decimal value, like 3 or 200.
Float A number with or without decimal values, like 0.5 or 13.25.
Boolean A value that can only be either true or false. Commonly used in logic or in toggles.
String A piece of text, like a name or a message.
Char A single character in a string, often alphabetic or numeric. Rarely used.
Vectors Vectors represent a set of float coordinates.
Vector 2, with X and Y coordinates for 2D;
Vector 3, with X, Y and Z coordinates for 3D;
Vector 4, with X, Y, Z and W coordinates, rarely used.
GameObject GameObjects are the base entity in Unity scenes. Each game object has a name, a transform
for its position and rotation, and a list of components.

Example:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DataType : MonoBehaviour
{
    int playerHealth = 100;
    float playerVelocity = 14.74f;
    string msg = "Welcome to the Game";
    bool isGameOver = false;

    void Start()
    {
        Debug.Log("Initial Health: " + playerHealth);
        Debug.Log("Player's Velocity: " + playerVelocity + " km/h");
        Debug.Log(msg);
        Debug.Log("Game Over: " + isGameOver);
    }

    // Update is called once per frame
    void Update()
    {
    }
}

3.5 Types of Methods


A method is a block of code which only runs when it is called. You can pass data, known as parameters,
into a method. Methods are used to perform certain actions, and they are also known as functions. Why
use methods? To reuse code: define the code once, and use it many times.
Here are some common types of methods:

• Simple Method

public class PlayerController : MonoBehaviour
{
    void Start()
    {
        PlayerHealth(); // function call
    }

    void PlayerHealth() // function declaration
    {
    }
}

• Parameterized Method

public class PlayerController : MonoBehaviour
{
    void Start()
    {
        PlayerHealth(10); // 10 is the argument passed to the parameter
    }

    void PlayerHealth(int pHealth) // function declaration
    {
        print("Player Health is " + pHealth);
    }
}

• Parameter and Return Method

public class PlayerController : MonoBehaviour
{
    void Start()
    {
        int tempHealth = 5;
        tempHealth = tempHealth + PlayerHealth(10); // now tempHealth = 5 + 20 = 25
    }

    int PlayerHealth(int pHealth) // function declaration
    {
        return pHealth * 2; // 10 * 2 is returned to the caller
    }
}

• Method Overloading

Methods with the same name in the same class but with different parameter lists are called method overloading.

public int Add(int valueOne, int valueTwo)
{
    return valueOne + valueTwo;
}

public int Add(int valueOne, int valueTwo, int valueThree)
{
    return valueOne + valueTwo + valueThree;
}

• Method Overriding

A method with the same name and the same parameters, defined in both a base class and a derived
class, is called method overriding. In C#, the base method must be marked virtual and the derived
method must be marked override.

public class Employee
{
    public virtual void PrintDetails()
    {
        Debug.Log("I am an employee.");
    }
}

public class ContractEmployee : Employee
{
    public override void PrintDetails()
    {
        Debug.Log("I am a contract employee.");
    }
}
3.6 If Statements
The variables change potentially in many different circumstances. Like when the level changes, when the
player changes their position, and so on. Accordingly, you will often need to check the value of a variable
to branch the execution of your scripts that perform different sets of actions depending on the value.
For example, if bikesPetrol reaches 0 percent, you will perform a death sequence, but if bikesPetrol is at 20
percent, you might only display a warning message.
C# offers two main conditional statements to achieve program branching like this in Unity: the
switch statement and the if statement. The if statement has various forms. The most basic form checks a
condition and executes a subsequent block of code if and only if the condition is true.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class IfStatement : MonoBehaviour


{
public int myNumber = 15;

void Start()
{
if (myNumber == 10)
{
print("myNumber is equal to 10");
}
else if (myNumber == 15)
{
print("myNumber is equal to 15");

}
else
{
print("myNumber is not equal to both");
}
}
void Update()
{

}
}
3.7 Switch Statement
Switch Statement just like the if Statement can be used to write a conditional block that will result into
different outputs based on the match expression or variable that is being switched.
using UnityEngine;
using System.Collections;

public class ConversationScript : MonoBehaviour


{
public int intelligence = 5;

void Greet()
{
switch (intelligence)
{
case 5:
print ("Why hello there good sir! Let me teach you about Trigonometry!");
break;
case 4:
print ("Hello and good day!");
break;
default:
print ("Incorrect intelligence level.");
break;
}
}
}

3.8 Loops
These are the most common types of loops used in Unity3D C#:

• For Loops:
For Loop is probably the most common and flexible loop. Works by creating a loop with a controllable
number of iterations. Functionally it begins by checking conditions in the loop. After each loop, known as
an iteration, it can optionally increment a value.
The syntax for this has three arguments. The first one is iterator; this is used to count through the
iterations of the loop. The second argument is the condition that must be true for the loop to continue.
Finally, the third argument defines what happens to the iterator in each loop.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ForLoop : MonoBehaviour


{
int numEnemies = 3;

void Start()
{
for (int i = 0; i < numEnemies; i++)
{
print("Creating enemy number: " + i);
}
}
}

• For Each Loop

The foreach loop is very simple and easy to use, with the simplest syntax. Write the foreach
keyword followed by parentheses. Inside the parentheses, specify the type of data you want to iterate
over, then pick a name for the loop variable; this name is used to access the current element inside the
loop body. After the name, write the in keyword, followed by the collection variable name.

public class ForEachLoop : MonoBehaviour
{
    void Start()
    {
        string[] names = new string[3];

        names[0] = "JavaTpoint";
        names[1] = "Nikita";
        names[2] = "Unity Tutorial";

        foreach (string item in names)
        {
            print(item);
        }
    }
}
3.9 Arrays
An array is used to store a sequential collection of values of the same type. In short, an array is used to
store lists of values in a single variable. Suppose you want to store a number of player names: rather than
storing them individually as string p1, string p2, string p3, etc., we can store them in an array.
Arrays are a way of storing a collection of data of the same type together.

Declaring an array
To declare an array in C#, you must first say what type of data will be stored in the array. After the type,
specify an open square bracket and then immediately a closed square bracket, []. This will make the
variable an actual array. We also need to specify the size of the array. It simply means how many places are
there in our variable to be accessed.
Syntax: accessModifier datatype[] arrayname = new datatype[arraySize];
Example: public string[] name = new string[4];
To allocate empty values to all places in the array, simply write the "new" keyword followed by the type,
an open square bracket, a number describing the size of the array, and then a closed square bracket.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ArrayExample : MonoBehaviour
{
    public int[] playerNumber = new int[5];

    void Start()
    {
        for (int i = 0; i < playerNumber.Length; i++)
        {
            playerNumber[i] = i;
            Debug.Log("Player Number: " + playerNumber[i]);
        }
    }
}

3.10 Scope & Access Modifiers


The scope of a variable is the area in code where that variable can be used. A variable is said to be
local to the place in code where it can be used. Code blocks are generally what defines a variable's scope,
and they are denoted by braces.
Access modifiers are the keywords. These modifiers are used to specify the declared accessibility of a
member or a type. There are four access modifiers in C#:

• Public Access is not restricted.
• Protected Access is limited to the containing class and types derived from it.
• Internal Access is limited to the current assembly.
• Private Access is limited to the containing type.
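The four modifiers can be seen side by side in a small sketch (the class and field names are hypothetical):

```csharp
public class Player
{
    public int score;        // accessible from anywhere
    protected int health;    // accessible here and in derived classes
    internal int level;      // accessible anywhere in the same assembly
    private int secretCode;  // accessible only inside Player
}

public class Knight : Player
{
    void Heal()
    {
        health = 100;     // OK: protected member, Knight derives from Player
        // secretCode = 1;  // compile error: private to Player
    }
}
```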

3.11 Classes
Classes are the blueprints for your objects. Basically, in Unity, all of your scripts will begin with a class
declaration. Unity automatically puts this in the script for you when you create a new C# script. This class
shares its name with the script file it is in. This is very important because if you change the name of one,
you need to change the name of the other. So, try to name your script sensibly when you create it.
The class is a container for variables and functions, and is a nice way to group things that work together.
Classes are an organizational tool, part of something known as object-oriented programming, or OOP for
short. One of the principles of object-oriented programming is to split your scripts up into multiple scripts,
each one taking a single role or responsibility; ideally, each class should be dedicated to one task.
The main aim of object-oriented programming is to allow the programmer to develop software in modules.
This is accomplished through objects. Objects contain data, like integers or lists, and functions, which are
usually called methods.
// Player.cs
public class Player
{
    public string name;
    public int score;

    public void gameData()
    {
        Debug.Log("Player name = " + name);
        Debug.Log("Player score = " + score);
    }
}

// PlayerDetails.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PlayerDetails : MonoBehaviour
{
    private Player P1;

    void Start()
    {
        P1 = new Player();
        P1.name = "Bill";
        P1.score = 10;
        P1.gameData();
    }
}

3.12 Basic Pillars of OOP


There are 4 pillars of OOP:

• Encapsulation
We all have studied encapsulation as hiding data members and enabling users to access data through
public methods that we call getters and setters. But why? Let's forget that and reiterate with a simpler
definition.
Encapsulation is a technique of restricting a user from directly modifying the data members or variables of
a class in order to maintain the integrity of the data. How do we do that? We restrict access to the
variables by switching the access modifier to private and exposing public methods that we can use to
access the data.

• Inheritance
Inheritance is a technique of acquiring the properties of another class having features in common. It allows
us to increase the reusability and reduce the duplication of code. It is also known as a child-parent
relationship, where a child inherits the properties of its parent. This is the reason it is called ‘is-a
relationship’ where the child is-a type of parent.

• Abstraction
Abstraction is a technique of providing only the essential details to the user by hiding the unnecessary or
irrelevant details of an entity. This helps in reducing the operational complexity at the user-end.
Abstraction enables us to provide a simple interface to a user without asking for complex details to
perform an action. In simpler words, giving the user the ability to drive the car without requiring to
understand tiny details of ‘how does the engine work’.

• Polymorphism
The last and the most important of all 4 pillars of OOP is Polymorphism. Polymorphism means “many
forms”. By its name, it is a feature that allows you to perform an action in multiple or different ways. When
we talk about polymorphism, there isn’t a lot to discuss unless we talk about its types.
There are two types of polymorphism:
➢ Method Overloading – Static Polymorphism (Static Binding)
The method overloading or static polymorphism, also known as Static Binding, also known as compile-
time binding is a type where method calls are defined at the time of compilation. Method overloading
allows us to have multiple methods with the same name having different datatypes of parameter, or a
different number of parameters, or both.
➢ Method Overriding – Dynamic Polymorphism (Dynamic Binding)
In contrast to method overloading, method overriding allows multiple methods to have exactly the same
signature, but they should be in different classes. The question is: how is this special?
These classes have an IS-A relationship i.e. should have inheritance between them. In other words, in
method overriding or dynamic polymorphism, methods are resolved dynamically at the runtime when
the method is called. This is done based on the reference of the object it is initialized with.
Unity3D Scripting API
API: Application Program Interface

4.1 Awake() & Start()

• Awake
Awake is called either when an active GameObject that contains the script is initialized when a Scene loads,
or when a previously inactive GameObject is set to active, or after a GameObject created with
Object.Instantiate is initialized. Use Awake to initialize variables or states before the application starts.
Unity calls Awake only once during the lifetime of the script instance. A script's lifetime lasts until the
Scene that contains it is unloaded. If the Scene is loaded again, Unity loads the script instance again, so
Awake will be called again. If the Scene is loaded multiple times additively, Unity loads several script
instances, so Awake will be called several times (one on each instance).
using UnityEngine;
public class ExampleClass : MonoBehaviour
{
private GameObject target;

void Awake()
{
target = GameObject.FindWithTag("Player");
}
}

• Start

Start is called on the frame when a script is enabled just before any of the Update methods are called the
first time.
Like the Awake function, Start is called exactly once in the lifetime of the script. However, Awake is called
when the script object is initialised, regardless of whether or not the script is enabled. Start may not be
called on the same frame as Awake if the script is not enabled at initialisation time.

using UnityEngine;
public class ExampleClass : MonoBehaviour
{
    private GameObject target;

    void Awake()
    {
        target = GameObject.FindWithTag("Player");
    }

    void Start()
    {
        if (target != null)
        {
            print(target.name);
        }
    }
}

• Comparison

Feature                                                Awake()                      Start()
Order of execution                                     First function to be called  Called after Awake and OnEnable
Called when GameObject is active and script disabled   Yes                          No
Called when GameObject is disabled                     No                           No
Can be delayed                                         No                           Yes
Can be used as a coroutine                             No                           Yes
Number of times called in the script's lifetime        Once                         Once

4.2 Update() & FixedUpdate()


• Update

Update is called every frame, if the MonoBehaviour is enabled.


Update is the most commonly used function to implement any kind of game script. Not every
MonoBehaviour script needs Update.

public class ExampleClass : MonoBehaviour


{
private float update;

void Awake()
{
Debug.Log("Awake");
update = 0.0f;
}

void Update()
{
update += Time.deltaTime;
if (update > 1.0f)
{
update = 0.0f;
Debug.Log("Update");
}
}
}
• FixedUpdate

Update runs once per frame. FixedUpdate can run once, zero, or several times per frame, depending on
how many physics frames per second are set in the time settings, and how fast/slow the framerate is.

It's for this reason that FixedUpdate should be used when applying forces, torques, or other physics-
related functions - because you know it will be executed exactly in sync with the physics engine itself.
MonoBehaviour.FixedUpdate has the frequency of the physics system; it is called every fixed frame-rate
frame. Compute Physics system calculations after FixedUpdate. 0.02 seconds (50 calls per second) is the
default time between calls.
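A minimal sketch of the rule above, assuming the game object has a Rigidbody attached:

```csharp
using UnityEngine;

public class Mover : MonoBehaviour
{
    public float thrust = 5f;
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Runs at the fixed timestep (0.02 s by default), in sync with
        // the physics engine, so forces accumulate consistently
        rb.AddForce(Vector3.forward * thrust);
    }
}
```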

4.3 Vector2, Vector3


In game development, vectors are mainly used to describe the position of a game object, and to determine
its speed and direction. Vectors can be expressed in multiple dimensions, and Unity provides Vector2,
Vector3 and Vector4 classes for working with 2D, 3D, and 4D vectors. We will be mainly focused on
Vector2 and Vector3 since they are used when developing 2D and 3D games with Unity game engine.

• Vector2
A Vector2 object describes the X and Y position of a game object in Unity. Since it’s a Vector2, and it only
describes X and Y values, it is used for 2D game development.
In a 2D game, if you move a game object by an x amount of units on the X axis and a y amount of units
on the Y axis, you will get the new position of that game object.

• Vector3
The same goes for a 3D game, except that you can move the game object on X, Y, and Z axis in the game
space. The values for the axis are located in the Position property of the Transform component.
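A small sketch of both vector types in use; the values are arbitrary:

```csharp
using UnityEngine;

public class VectorExample : MonoBehaviour
{
    void Start()
    {
        Vector2 spawn2D = new Vector2(3f, 4f);      // X and Y, for 2D games
        Vector3 spawn3D = new Vector3(3f, 0f, 4f);  // X, Y and Z, for 3D games

        // A vector also encodes a length (magnitude) and a direction
        Debug.Log(spawn2D.magnitude);    // 5, by the Pythagorean theorem
        Debug.Log(spawn3D.normalized);   // same direction, length 1

        // Reading and writing a game object's position via its Transform
        transform.position = spawn3D;
    }
}
```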
4.4 Enabling & Disabling Components
How to enable and disable components via script:

• Create a script EnableComponents.cs.

• Add the script to a game object that has a Light component.
• Press Play and press the Space key to enable/disable the light.
using UnityEngine;
using System.Collections;

public class EnableComponents : MonoBehaviour


{
private Light myLight;

void Start ()
{
myLight = GetComponent<Light>();
}

void Update ()
{
if(Input.GetKeyUp(KeyCode.Space))
{
myLight.enabled = !myLight.enabled;
}
}
}

4.5 Activate & Deactivate GameObject

• Declaration
GameObject.SetActive(bool value)

• Parameters
value Activate or deactivate the object, where true activates the GameObject and false deactivates the
GameObject.

• Description
Activates/Deactivates the GameObject, depending on the given true or false value.
A GameObject may be inactive because a parent is not active. In that case, calling SetActive will not
activate it, but only set the local state of the GameObject, which you can check using
GameObject.activeSelf. Unity can then use this state when all parents become active.
Deactivating a GameObject disables each component, including attached renderers, colliders, rigidbodies,
and scripts. For example, Unity will no longer call the Update() method of a script attached to a deactivated
GameObject.
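A minimal sketch of SetActive in practice; the "pausePanel" object is a hypothetical reference assigned in the Inspector:

```csharp
using UnityEngine;

public class PauseToggle : MonoBehaviour
{
    public GameObject pausePanel;   // assign in the Inspector

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.P))
        {
            // activeSelf is the local state we set; an inactive parent
            // can still keep the panel hidden in the Hierarchy
            pausePanel.SetActive(!pausePanel.activeSelf);
        }
    }
}
```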
4.6 OnEnable() & OnDisable()

• OnEnable()
This function is called when the object becomes enabled and active. This happens when a MonoBehaviour
instance is created, such as when a level is loaded or a GameObject with the script component is
instantiated.
OnEnable is unique because it is called every time the game object is enabled, no matter how many times
this happens. Put code here that needs to be executed each time the object is activated.

• OnDisable()
This function is called when the behaviour becomes disabled.
This is also called when the object is destroyed and can be used for any cleanup code. When scripts are
reloaded after compilation has finished, OnDisable will be called, followed by an OnEnable after the script
has been loaded.
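The pair can be demonstrated with a simple logging sketch; toggling the component's checkbox in the Inspector during Play mode triggers both calls:

```csharp
using UnityEngine;

public class LifecycleLogger : MonoBehaviour
{
    void OnEnable()
    {
        // Runs every time the object or component becomes enabled
        Debug.Log("Enabled: ready for action");
    }

    void OnDisable()
    {
        // Runs when disabled or destroyed; a good place for cleanup
        Debug.Log("Disabled: cleaning up");
    }
}
```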

4.7 Order of Execution of Event Functions


4.8 Translate & Rotate

• Translate
Transform.Translate(Vector3 translation) is a function for moving a gameobject in the direction and
distance of translation. Vector3 (x,y,z) is a type of variable used for 3D coordinates.
If relativeTo is left out or set to Space.Self the movement is applied relative to the transform's local axes.
(the x, y and z axes shown when selecting the object inside the Scene View.) If relativeTo is Space.World
the movement is applied relative to the world coordinate system.

• Rotate
Use Transform.Rotate to rotate GameObjects in a variety of ways. The rotation is often provided as an
Euler angle and not a Quaternion.
You can specify a rotation in world axes or local axes.
Example

using UnityEngine;
using System.Collections;

public class TransformFunctions : MonoBehaviour


{
public float moveSpeed = 10f;
public float turnSpeed = 50f;

void Update ()
{
if(Input.GetKey(KeyCode.UpArrow))
transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);

if(Input.GetKey(KeyCode.DownArrow))
transform.Translate(-Vector3.forward * moveSpeed * Time.deltaTime);

if(Input.GetKey(KeyCode.LeftArrow))
transform.Rotate(Vector3.up, -turnSpeed * Time.deltaTime);

if(Input.GetKey(KeyCode.RightArrow))
transform.Rotate(Vector3.up, turnSpeed * Time.deltaTime);
}
}

4.9 Time.deltaTime & Time.timeScale

• Time.deltaTime
The interval in seconds from the last frame to the current one.
Time.deltaTime returns the amount of time in seconds that elapsed since the last frame completed. This
value varies depending on the frames per second (FPS) rate at which your game or app is running.

• Time.timeScale
Time.timeScale controls the rate at which time elapses. You can read this value, or set it to control how
fast time passes, allowing you to create slow-motion effects.
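Both properties can be seen together in a small slow-motion sketch (the key binding is an arbitrary choice):

```csharp
using UnityEngine;

public class SlowMotion : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.S))
        {
            // 0.5 runs the game at half speed; 1 restores normal speed
            Time.timeScale = Time.timeScale == 1f ? 0.5f : 1f;
        }

        // Scaling movement by Time.deltaTime keeps it frame-rate
        // independent: 2 units per second regardless of FPS
        transform.Translate(Vector3.forward * 2f * Time.deltaTime);
    }
}
```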

4.10 Linear Interpolation


When making games it can sometimes be useful to linearly interpolate between two values. This is done
with a function called Lerp. Linearly interpolating is finding a value that is some percentage between two
given values. For example, we could linearly interpolate between the numbers 3 and 5 by 50% to get the
number 4. This is because 4 is 50% of the way between 3 and 5. In Unity there are several Lerp functions
that can be used for different types.
➢ For the example we have just used, the equivalent would be the Mathf.Lerp function and would look like this:
// In this case, result = 4
float result = Mathf.Lerp (3f, 5f, 0.5f);
The Mathf.Lerp function takes 3 float parameters: one representing the value to interpolate from; another
representing the value to interpolate to and a final float representing how far to interpolate. In this case,
the interpolation value is 0.5 which means 50%. If it was 0, the function would return the ‘from’ value and
if it was 1 the function would return the ‘to’ value.

➢ Other examples of Lerp functions include Color.Lerp and Vector3.Lerp. These work in exactly the same way
as Mathf.Lerp but the ‘from’ and ‘to’ values are of type Color and Vector3 respectively. The third
parameter in each case is still a float representing how much to interpolate. The result of these
functions is finding a colour that is some blend of two given colours and a vector that is some
percentage of the way between the two given vectors.
Let’s look at another example:
Vector3 from = new Vector3 (1f, 2f, 3f);
Vector3 to = new Vector3 (5f, 6f, 7f);
// Here result = (4, 5, 6)
Vector3 result = Vector3.Lerp (from, to, 0.75f);
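A common in-game use of Lerp is smoothing movement inside Update. This is a sketch, with an arbitrary target position:

```csharp
using UnityEngine;

public class SmoothMove : MonoBehaviour
{
    public Vector3 target = new Vector3(10f, 0f, 0f);

    void Update()
    {
        // Re-lerping from the current position each frame gives an
        // ease-out effect: fast at first, slowing as it approaches
        transform.position = Vector3.Lerp(transform.position, target,
                                          2f * Time.deltaTime);
    }
}
```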

4.11 Instantiate & Destroy


One of the most useful tools in Unity game development is the ability to create and remove GameObjects
from the scene: anything from making enemies or powerups appear to removing them as they get killed
or collected. The ability to include this behavior in our games is fundamental.
Luckily for us, Unity provides us with two simple Methods for these very purposes:

• Instantiate()
The name should already give you a hint as to what it is for — creating instances of GameObjects to add to
the scene.
Let’s take a look at the syntax of how this is used, break it down and write some sample code:
Instantiate(Object obj, Vector3 position, Quaternion rotation);
In its most common form, the Method takes 3 arguments. The first is an Object. This Object is what we
wish to instantiate in our scene. More often than not, in Unity we will be making use of GameObject
references.
The second argument is a Vector3 that defines a position in space. This is where we wish to create this new
instance.
Finally we have the Object’s initial rotation in the form of a Quaternion. This will depend on what your
plans are for that object.
• Destroy()
Now, let's pretend that cube represents an enemy, and there's only one thing to do about enemies:
Destroy() them all!
Let's have a look at the Method's syntax:
Destroy(Object obj, float t = 0.0f);
This Method is quite simple to use. It takes a reference to the Object we want to destroy and an optional
float that will delay the operation by that amount of seconds. By default this is 0.0f, which will execute the
Method instantly.
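The two Methods combine naturally. In this sketch, "enemyPrefab" is an assumed prefab reference assigned in the Inspector:

```csharp
using UnityEngine;

public class EnemySpawner : MonoBehaviour
{
    public GameObject enemyPrefab;   // assign a prefab in the Inspector

    void Start()
    {
        // Create an instance at the spawner's position, unrotated
        GameObject enemy = Instantiate(enemyPrefab,
                                       transform.position,
                                       Quaternion.identity);

        // Remove it again after a 5 second delay
        Destroy(enemy, 5f);
    }
}
```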

4.12 Get Button, Get Key, Get Axis

• Input.GetButton
Declaration public static bool GetButton(string buttonName);
Parameters buttonName The name of the button such as Jump.
Returns bool True when an axis has been pressed and not released.
Description
Returns true while the virtual button identified by buttonName is held down.
Think auto fire - this will return true as long as the button is held down. Use this when implementing
events that trigger an action, e.g. shooting a weapon. The buttonName argument will normally be one of
the names defined in the Input Manager, such as Jump or Fire1. GetButton returns false once the button is released.
Example
// Instantiates a projectile every 0.5 seconds,
// if the Fire1 button (default is Ctrl) is pressed.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour


{
public GameObject projectile;
public float fireDelta = 0.5F;

private float nextFire = 0.5F;


private GameObject newProjectile;
private float myTime = 0.0F;

void Update()
{
myTime = myTime + Time.deltaTime;

if (Input.GetButton("Fire1") && myTime > nextFire)


{
nextFire = myTime + fireDelta;
newProjectile = Instantiate(projectile, transform.position, transform.rotation) as GameObject;

// create code here that animates the newProjectile

nextFire = nextFire - myTime;


myTime = 0.0F;
}
}
}

• Input.GetAxis
Returns the value of the virtual axis identified by axisName.

The value will be in the range -1...1 for keyboard and joystick input devices. The meaning of this value
depends on the type of input control, for example with a joystick's horizontal axis a value of 1 means the
stick is pushed all the way to the right and a value of -1 means it's all the way to the left; a value of 0 means
the joystick is in its neutral position.

If the axis is mapped to the mouse, the value is different and will not be in the range of -1...1. Instead it'll
be the current mouse delta multiplied by the axis sensitivity. Typically a positive value means the mouse is
moving right/down and a negative value means the mouse is moving left/up.
This is frame-rate independent; you do not need to be concerned about varying frame-rates when using
this value.
To set up your input or view the options for axisName, go to Edit > Project Settings > Input Manager. This
brings up the Input Manager. Expand Axis to see the list of your current inputs. You can use one of these as
the axisName. To rename the input or change the positive button etc., expand one of the options, and
change the name in the Name field or Positive Button field. Also, change the Type to Joystick Axis. To add a
new input, add 1 to the number in the Size field.

Example

using UnityEngine;
using System.Collections;

// A very simplistic car driving on the x-z plane.

public class ExampleClass : MonoBehaviour


{
public float speed = 10.0f;
public float rotationSpeed = 100.0f;

void Update()
{
// Get the horizontal and vertical axis.
// By default they are mapped to the arrow keys.
// The value is in the range -1 to 1
float translation = Input.GetAxis("Vertical") * speed;
float rotation = Input.GetAxis("Horizontal") * rotationSpeed;
// Make it move 10 meters per second instead of 10 meters per frame...
translation *= Time.deltaTime;
rotation *= Time.deltaTime;

// Move translation along the object's z-axis


transform.Translate(0, 0, translation);

// Rotate around our y-axis


transform.Rotate(0, rotation, 0);
}
}

using UnityEngine;
using System.Collections;

// Performs a mouse look.

public class ExampleClass : MonoBehaviour


{
float horizontalSpeed = 2.0f;
float verticalSpeed = 2.0f;

void Update()
{
// Get the mouse delta. This is not in the range -1...1
float h = horizontalSpeed * Input.GetAxis("Mouse X");
float v = verticalSpeed * Input.GetAxis("Mouse Y");

transform.Rotate(v, h, 0);
}
}

• Input.GetKey
Returns true while the user holds down the key identified by name.
GetKey will report the status of the named key. This might be used to confirm a key is used for auto fire.
For the list of key identifiers see Input Manager. When dealing with input it is recommended to use
Input.GetAxis and Input.GetButton instead since it allows end-users to configure the keys. Returns true
while the user holds down the key identified by the key KeyCode enum parameter.
Example
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour


{
void Update()
{
if (Input.GetKey(KeyCode.UpArrow))
{
print("up arrow key is held down");
}

if (Input.GetKey(KeyCode.DownArrow))
{
print("down arrow key is held down");
}
}
}

4.13 OnMouseDown()
MonoBehaviours have a number of methods which are called automatically under certain circumstances.
These are methods such as Update, which is called every frame, and Start which is called just before the
first Update. Another of these methods is OnMouseDown.
If a MonoBehaviour with an implementation of OnMouseDown is attached to a GameObject with a Collider
then OnMouseDown will be called immediately when the left mouse button is pressed. Here is an
example:
using UnityEngine;
public class Clicker : MonoBehaviour
{
void OnMouseDown()
{
// Code here is called when the GameObject is clicked on.
}
}

This is probably the easiest way to detect mouse clicks. If you are just starting out using Unity and only
have a simple use case then this is recommended. However, if you need a little more flexibility in the way
you handle mouse clicks then using input might be more useful to you.

4.14 Invoke & Invoke Repeating

• Invoke
Declaration Invoke(string methodName, float time);
Invokes the method methodName in time seconds.
If time is set to 0 and Invoke is called before the first frame update, the method is invoked at the next
Update cycle before MonoBehaviour.Update. In this case, it's better to call the method directly.
Example
using UnityEngine;
using System.Collections.Generic;

public class ExampleScript : MonoBehaviour


{
// Launches a projectile in 2 seconds
Rigidbody projectile;
void Start()
{
Invoke("LaunchProjectile", 2.0f);
}

void LaunchProjectile()
{
Rigidbody instance = Instantiate(projectile);
instance.velocity = Random.insideUnitSphere * 5.0f;
}
}

• InvokeRepeating

Declaration InvokeRepeating(string methodName, float time, float repeatRate);

Invokes the method methodName in time seconds, then repeatedly every repeatRate seconds.

Example

using UnityEngine;
using System.Collections.Generic;

// Starting in 2 seconds.
// a projectile will be launched every 0.3 seconds

public class ExampleScript : MonoBehaviour


{
public Rigidbody projectile;

void Start()
{
InvokeRepeating("LaunchProjectile", 2.0f, 0.3f);
}

void LaunchProjectile()
{
Rigidbody instance = Instantiate(projectile);

instance.velocity = Random.insideUnitSphere * 5;
}
}

4.15 Enumeration & IEnumerator

• Enumeration
Enums are a data type that we can use to make easy to read code. Here is a code example of a difficulty
level system taken from the C#.
Example
using UnityEngine;
using System.Collections;

public class Selection : MonoBehaviour
{
    public enum LevelSelector
    {
        Easy,
        Normal,
        Hard,
        Expert
    }

    public LevelSelector currentLevel;

    void Start()
    {
        switch (currentLevel)
        {
            case LevelSelector.Easy:
                break;
            case LevelSelector.Normal:
                break;
            case LevelSelector.Hard:
                break;
            case LevelSelector.Expert:
                break;
        }
    }
}
• IEnumerator
A coroutine is a function that allows pausing its execution and resuming from the same point after a
condition is met. We can say a coroutine is a special type of function used in Unity to stop execution
until a certain condition is met, then continue from where it left off.
This is the main difference between C# functions and Coroutines functions other than the syntax. A typical
function can return any type, whereas coroutines must return an IEnumerator, and we must use yield
before return.
So basically, coroutines facilitate us to break work into multiple frames, you might have thought we can do
this using Update function. You are right, but we do not have any control over the Update function.
Coroutine code can be executed on-demand or at a different frequency (e.g., every 5 seconds instead of
every frame).
IEnumerator MyCoroutine()
{
Debug.Log("Hello world");
yield return null;
//yield must be used before any return
}
Here, the yield is a special return statement. It is what tells Unity to pause the script and continue on the
next frame.
We can use yield in different ways:
yield return null - Resumes execution on the next frame, after all Update functions have been called.
yield return new WaitForSeconds(t) - Resumes execution after (approximately) t seconds.
yield return new WaitForEndOfFrame() - Resumes execution after all cameras and GUI have been rendered.
yield return new WaitForFixedUpdate() - Resumes execution after FixedUpdate has been called on all scripts.
yield return new WWW(url) - Resumes execution after the web resource at url has been downloaded or has failed. (WWW is a legacy API; newer Unity versions use UnityWebRequest instead.)
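To see these yield instructions used in practice, here is a minimal sketch of a countdown coroutine. The class and method names are illustrative, not from the course material:

```csharp
using UnityEngine;
using System.Collections;

public class CountdownExample : MonoBehaviour
{
    void Start()
    {
        // Coroutines must be started with StartCoroutine;
        // calling the method directly only creates the IEnumerator.
        StartCoroutine(CountdownRoutine(3));
    }

    IEnumerator CountdownRoutine(int seconds)
    {
        for (int i = seconds; i > 0; i--)
        {
            Debug.Log("Starting in " + i);
            // Pause this coroutine for one second, then resume here.
            yield return new WaitForSeconds(1f);
        }
        Debug.Log("Go!");
    }
}
```

Each time the coroutine hits a yield it hands control back to Unity, and Unity resumes it at that exact line once the wait condition is satisfied.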

4.16 PlayerPrefs
PlayerPrefs is a class that stores Player preferences between game sessions. It can store string, float and
integer values into the user’s platform registry.
Unity stores PlayerPrefs in a local registry, without encryption. Do not use PlayerPrefs data to store
sensitive data.
• DeleteAll - Removes all keys and values from the preferences. Use with caution.
• DeleteKey - Removes the given key from the PlayerPrefs. If the key does not exist, DeleteKey has no impact.
• GetFloat - Returns the float value corresponding to key in the preference file if it exists.
• GetInt - Returns the integer value corresponding to key in the preference file if it exists.
• GetString - Returns the string value corresponding to key in the preference file if it exists.
• HasKey - Returns true if the given key exists in PlayerPrefs, otherwise returns false.
• Save - Saves all modified preferences.
• SetFloat - Sets the float value of the preference identified by the given key. You can use PlayerPrefs.GetFloat to retrieve this value.
• SetInt - Sets a single integer value for the preference identified by the given key. You can use PlayerPrefs.GetInt to retrieve this value.
• SetString - Sets a single string value for the preference identified by the given key. You can use PlayerPrefs.GetString to retrieve this value.
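These methods combine naturally into a save/load pattern. The sketch below stores a high score; the key name "HighScore" is a hypothetical choice, not part of the PlayerPrefs API:

```csharp
using UnityEngine;

public class HighScoreStore : MonoBehaviour
{
    const string Key = "HighScore"; // hypothetical key name

    public void SaveScore(int score)
    {
        // Only overwrite when the new score beats the stored one.
        if (!PlayerPrefs.HasKey(Key) || score > PlayerPrefs.GetInt(Key))
        {
            PlayerPrefs.SetInt(Key, score);
            PlayerPrefs.Save(); // flush the change to disk immediately
        }
    }

    public int LoadScore()
    {
        // The second argument is the default returned
        // when the key does not exist yet.
        return PlayerPrefs.GetInt(Key, 0);
    }
}
```

Calling Save explicitly guards against losing the value if the game crashes before Unity's automatic write on application quit.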

• SetInt
Declaration SetInt(string key, int value);
using UnityEngine;
public class Example : MonoBehaviour
{
public void SetInt(string KeyName, int Value)
{
PlayerPrefs.SetInt(KeyName, Value);
}
public int GetInt(string KeyName)
{
return PlayerPrefs.GetInt(KeyName);
}
}

• SetString
Declaration SetString(string key, string value);
using UnityEngine;
public class Example : MonoBehaviour
{
public void SetString(string KeyName, string Value)
{
PlayerPrefs.SetString(KeyName, Value);
}

public string GetString(string KeyName)
{
return PlayerPrefs.GetString(KeyName);
}
}
Physics
5.1 Colliders & Triggers
Many games simulate real-world physics and collisions are also a crucial part of these simulations.
Likewise, while we develop games, most of the time we start or stop events based on other events. We use
colliders and/or triggers for these purposes.

• What is the difference between Colliders and Triggers?


Collision: In Unity, colliders are components that allow the physics engine to handle the collisions. The
physics engine simulates collisions using colliders. We can determine which objects will collide with each
other, or how objects will behave under collisions using colliders.
When I think of collisions, the first thing that comes to mind is a hard-surface collision: car crashes, a bouncing ball, and a falling tree are all examples of hard-surface collisions!
Basically, a collision happens the moment one object touches another. When this happens, information about the collision is at our fingertips and can be accessed. As noted below, at least one of the colliding objects absolutely needs to have a Rigidbody attached for this to work.
Trigger: On the other hand, triggers are a special setup of colliders. The purpose of triggers is to fire events when two or more objects overlap. If an object has a collider that is configured as a trigger, that object does not collide with other objects but overlaps them instead.
A trigger is a pass-through collision: objects don't bounce off each other, but events can be triggered when contact is made. I use this with my bullet and enemy collisions so the bullet doesn't bounce off the enemy and fly off into oblivion; instead it appears to get absorbed and simply disappears. Collision information is also available for OnTrigger events. Just like OnCollisionEnter, OnTriggerEnter gets called the moment objects touch.

There are several collider types in Unity3D. Some of them are Box Collider, Capsule Collider, Sphere
Collider, MeshCollider, Terrain Collider, and Wheel Collider. There are also 2D versions of these colliders.
Mesh Collider creates a collider that exactly covers complex meshes. However, physics calculations for mesh colliders are expensive and, if possible, you should avoid using them. Most of the time developers add a few primitive colliders in order to cover the object; generally, that is sufficient to cover most parts of the object.
Primitive objects come with colliders attached. If you import a custom mesh into your scene, you need to
add a collider component for that object.
Types of Collider and Triggers:

• OnCollisionEnter(Collision collision)
This function will run once at the moment the collider collides with another collider. The Collision object passed to the method includes information about the collision, including a reference to the other GameObject that collided with this collider.
• OnCollisionStay(Collision collision)
This function will run continuously as long as the collision is still happening. A Collision object is also passed
to it with information about the collision happening.

• OnCollisionExit(Collision collision)
This function will run once as soon as the collision stops. A Collision object is also passed to it with
information about the collision that ended.
Example
using UnityEngine;

public class ExampleClass : MonoBehaviour
{
    void OnCollisionEnter(Collision col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car collided with Player Car");
        }
    }

    void OnCollisionStay(Collision col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car is still colliding with Player Car");
        }
    }

    void OnCollisionExit(Collision col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car stopped colliding with Player Car");
        }
    }

    void OnTriggerEnter(Collider col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car entered the Player Car trigger");
        }
    }

    void OnTriggerStay(Collider col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car is inside the Player Car trigger");
        }
    }

    void OnTriggerExit(Collider col)
    {
        if (col.gameObject.tag == "Enemycar")
        {
            print("Enemy Car left the Player Car trigger");
        }
    }
}

5.2 Rigid bodies


A big part of working with the Unity game engine involves gaining familiarity with the Unity Rigidbody and
its various properties. Rigidbody in Unity allows your GameObjects to act under the control of physics. It
allows you to interact with the physics of your objects and visualize how Unity is trying to simulate the
physics of the real world.
Discussed below are the properties of the Rigidbody:

• Use Gravity
This property determines whether gravity affects your game object or not. If it's set to false, then the Rigidbody behaves as if it's in outer space. In the Unity interface, you can use the arrow keys to move your object around the given plane. If gravity is enabled, the object will fall off as soon as it crosses the plane's boundary.

• Mass
This property of the Rigidbody is used for defining the mass of your object. By default, the value is read in kilograms. Rigidbodies with a large difference in masses can make the physics simulation highly unstable. The interaction between objects of different masses occurs in the same manner as it would in the real world; for instance, during a collision, a higher-mass object pushes a lower-mass object more. A common misconception is that heavy objects fall faster than lighter ones. This doesn't hold in the world around us: the speed of fall is dictated by gravity and drag.

• Drag
Drag can be interpreted as the amount of air resistance that affects the object when moving from forces.
When the drag value is set to 0, the object is subject to no resistance and can move freely. On the other
hand, if the value of the drag is set to infinity, then the object's movement comes to an immediate halt.
Essentially, drag is used to slow down an object. The higher the value of drag, the slower the movement of
the object becomes.
• Add Force
This property is used to add a force to the Rigidbody. Force is applied continuously along the direction of the force vector. Further, specifying the Force Mode enables the user to change the type of force to acceleration, velocity change, or impulse. Keep in mind that force can be applied only to an active Rigidbody; if the GameObject is inactive, AddForce has no effect. Additionally, the Rigidbody must not be kinematic. Once a force is applied, the state of the Rigidbody is set to awake by default. The value of drag influences how fast an object moves under force.
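As a minimal sketch of applying an impulse force, the jump snippet below uses a hypothetical jumpForce value; it assumes the GameObject has a Rigidbody attached:

```csharp
using UnityEngine;

public class JumpExample : MonoBehaviour
{
    public float jumpForce = 5f; // tunable in the Inspector
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            // ForceMode.Impulse applies the whole force in one step,
            // taking the Rigidbody's mass into account.
            rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }
}
```

Swapping ForceMode.Impulse for ForceMode.Acceleration or ForceMode.VelocityChange changes how mass factors into the resulting motion.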

• Angular Drag
Angular drag can be interpreted as the amount of air resistance that affects the object when rotating under torque. As was the case with Drag, setting the value of Angular Drag to 0 eliminates this air resistance. However, even by setting the value of Angular Drag to infinity you can't stop the object from rotating; the purpose of Angular Drag is only to slow the rotation down. The higher the value of Angular Drag, the slower the rotation of the object becomes.

• Is Kinematic
When the Is Kinematic property is enabled, the object is no longer driven by the physics engine. Forces, joints, and collisions stop having an impact on the Rigidbody; in this state it can only be manipulated through its Transform. Hitting kinematic objects brings about no change to their state, as they no longer exchange forces. This property comes in handy when you want to move platforms or wish to animate a Rigidbody that has a HingeJoint attached.

• Interpolate
Users are advised to use interpolate in situations where they have to synchronize their graphics with the
physics of their GameObject. As the unity graphics are computed in the update function and the physics in
the fixed update function, occasionally, they happen to be out of sync. To fix this lag, Interpolate is used.

• Collision Detection
This property is used to keep fast-moving objects from passing through other objects without detecting
collisions. For best results, users are encouraged to set this value to
CollisionDetectionMode.ContinuousDynamic for fast-moving objects. As for the other objects that these
need to collide with, you can set them to CollisionDetectionMode.Continuous. These two options are
known for having a big impact on physics performance. However, if you don’t have any problems with fast-
moving objects colliding with one another, then you can leave it set to the default value of
CollisionDetectionMode.Discrete.

• Constraints
Constraints are used for imposing restrictions on the Rigidbody's motion. They dictate which degrees of freedom are allowed in the simulation of the Rigidbody. By default, this is set to RigidbodyConstraints.None. There are two groups of constraints, Freeze Position and Freeze Rotation: Freeze Position restricts the movement of the Rigidbody along the X, Y, and Z axes, while Freeze Rotation restricts its rotation around them.
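The constraint flags can also be set from code by combining them with the | operator. A small sketch, assuming the GameObject has a Rigidbody:

```csharp
using UnityEngine;

public class ConstraintExample : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();

        // Combine flags with | to freeze several degrees of freedom:
        // this body may move only in X and Y, and may never rotate.
        rb.constraints = RigidbodyConstraints.FreezePositionZ
                       | RigidbodyConstraints.FreezeRotation;
    }
}
```

This is a common setup for 2.5D gameplay, where physics runs in 3D but the object must stay on a single plane.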
5.3 Joints
If you look around, you will see that joints play a huge role in the physical world; for instance, take a look at furniture or doors. A joint attaches one body to another and, depending on the type of joint, movement is restricted. Even the human body contains around 360 joints. So of course Unity added the concepts and tools to simulate joints in games.
In Unity, you can use joints to attach one Rigidbody to another. According to your requirements, you can restrict or allow movement. Joints also have other options that can be enabled for specific effects. For example, you can set a joint to break when the force applied to it exceeds a certain threshold. Some joints also allow a driving force to occur between the connected objects to set them in motion automatically. For example, to simulate a pendulum you can child your pendulum object to an empty GameObject, move the child down so that the empty object is the "pivot", and then run the rotating code on the parent. There are 5 main types of joints:

• Character Joint
You can also call this joint a ball and socket joint. It is so named due to the similarity with a human joint.
Character Joints are mainly used for Ragdoll effects. They are an extended ball-socket joint which allows
you to limit the joint on each axis.

• Configurable Joint
Skeletal joints can be emulated using configurable joints. You can configure this joint to force and restrict
rigid body movement in any degree of freedom. This joint is more flexible and is the most configurable
joint. It incorporates all the functionality of the other joint types and provides greater control of character
movement.

• Fixed Joint
This joint restricts the movement of a rigid body to follow the movement of the rigid body it is attached to. In Unity you can relate two bodies by parenting one Transform to the other, but this joint achieves the same effect through physics. It is also useful when you want two connected bodies that can later break apart.

• Hinge Joint
This joint simulates pendulum-like movement. It attaches to a point and rotates around that point, treating it as the axis of rotation.

• Spring Joint
A slinky can be simulated using this joint. Keeps rigid bodies apart from each other but lets
the distance between them stretch slightly.
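The break-threshold behaviour mentioned above can be sketched with a FixedJoint created at runtime. The breakForce value here is a hypothetical threshold, and the "other" Rigidbody is assumed to be assigned in the Inspector:

```csharp
using UnityEngine;

public class AttachExample : MonoBehaviour
{
    public Rigidbody other; // the body to attach to (assign in Inspector)

    void Start()
    {
        // Create the joint on this GameObject and connect it.
        FixedJoint joint = gameObject.AddComponent<FixedJoint>();
        joint.connectedBody = other;

        // If the force acting on the joint ever exceeds this threshold,
        // Unity destroys the joint and the bodies come apart.
        joint.breakForce = 2000f; // hypothetical threshold
    }
}
```

When a joint breaks, Unity also sends the OnJointBreak message to the GameObject, which can be used to trigger sound or particle effects.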

5.4 Raycasting
“Raycasting is the process of shooting an invisible ray from a point, in a specified direction to detect
whether any colliders lay in the path of the ray.” This is useful with side scrolling shooters, FPS, third
person shooters, bullet hell, and even adventure games. Being able to trace where a bullet or laser is going
to travel from start to finish means that we know exactly how it should behave. We can physically watch it
and manipulate it in the game world.
There are following types of Raycasting:
• RayCast
The most popular method is the Raycast method. This method "shoots" a ray from the origin position in a given direction. We can limit the ray's length if we want to, but by default the length is infinite.
using UnityEngine;
public class RayExample : MonoBehaviour
{
private void FixedUpdate()
{
Ray ray = new Ray(transform.position, transform.forward);
RaycastHit hitResult;

if (Physics.Raycast(ray, out hitResult))


{
Debug.Log($"Raycast hit: {hitResult.collider.name}");
}
}
}

• LineCast

The Linecast method is really similar to the Raycast method. The difference between those two is the
definition. In Raycast, we are defining the start position and direction. In Linecast, we define the start and
end position. This method checks if there is something in between those two points.
using UnityEngine;
public class LineExample : MonoBehaviour
{
private void FixedUpdate()
{
Vector3 startPosition = transform.position;
Vector3 endPosition = startPosition + Vector3.right * 10;
RaycastHit hitResult;
if (Physics.Linecast(startPosition, endPosition, out hitResult))
{
Debug.Log($"Linecast hit: {hitResult.collider.name}");
}
}
}

5.5 Physics Material


The Physic Material is used to adjust friction and bouncing effects of colliding objects.
To create a Physic Material select Assets > Create > Physic Material from the menu bar. Then drag the
Physic Material from the Project View onto a Collider in the scene.
A brief explanation of Physical Material Properties:
Dynamic Friction (0–1): How much friction is applied to the object when it is in motion. The higher the friction, the more an outside force (like gravity or an explosion) is needed to keep it moving; 0 is ice, 1 is super-glue sticky.

Static Friction (0–1): How much force is needed to get the object moving in the first place; 0 means anything gets it going, 1 means it requires a heavy push.

Bounciness (0–1): How bouncy the surface is when something collides with it (or it collides with something); 0 is a surface made of mud, 1 is one made of rubber.

Friction / Bounce Combine (Average, Minimum, Multiply, Maximum): Tells Unity which physics material takes priority when making the calculation. Defaults to Average, where it works out a middle ground, but sometimes it is useful to use Minimum (where the lowest value of the two colliding objects is used) or Maximum (where the highest value is used). For example, when a rubber ball hits a pile of mud you don't want it to bounce away, so use Minimum.
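Physic Materials can also be created and assigned from code instead of through the Assets menu. A minimal sketch, assuming the GameObject has a Collider (note that newer Unity releases rename the class to PhysicsMaterial):

```csharp
using UnityEngine;

public class BouncySetup : MonoBehaviour
{
    void Start()
    {
        // Create a material that behaves like rubber.
        PhysicMaterial bouncy = new PhysicMaterial();
        bouncy.bounciness = 1f; // fully bouncy
        // Take the higher bounciness of the two colliding surfaces.
        bouncy.bounceCombine = PhysicMaterialCombine.Maximum;

        // Assign it to this object's collider.
        GetComponent<Collider>().material = bouncy;
    }
}
```

Assigning through Collider.material gives each collider its own instance, so tweaking it at runtime does not affect other objects sharing the original asset.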
Animations
6.1 Animation & Animator

• Animation
As a Unity developer, you should know the basics of Unity Animation. By basics, it means you should be
able to create basic animations, work with imported animations, learn to use Unity Animator and control
the animation parameters.
Simply put, any action related to a GameObject is referred to as an animation, and the controller used to control those actions is called an Animator.
Let’s try to understand this in detail. But before that you should know the following:

• A game object can have more than one animation in Unity. For example, walking, running, jumping etc.
• You need a controller to control which animation plays at what time.
Now with that in mind. If you want to play a walk animation while the player is moving slowly and play the
run animation when the player is moving fast you can use the Unity animator to make that switch.
An animation is also referred to as an Animation Clip in Unity. You can create an animation or import it from other software like Blender, 3ds Max, or Maya. All animation files are saved as .anim files.

• Animator
An Animator Controller asset is created within Unity and allows you to arrange and maintain a set of
animations for a character or object. In most situations, it is normal to have multiple animations and switch
between them when certain game conditions occur. For example, you could switch from a walk animation
to a jump whenever the spacebar is pressed. However even if you just have a single animation clip you still
need to place it into an animator controller to use it on a Game Object.
The controller has references to the animation clips used within it, and manages the various animation
states and the transitions between them using a so-called State Machine, which could be thought of as a
kind of flow-chart, or a simple program written in a visual programming language within Unity.

In some situations, an Animator Controller is automatically created for you, such as in the situation where
you begin animating a new GameObject using the Animation Window.
In other cases, you would want to create a new Animator Controller asset yourself and begin to add states
to it by dragging in animation clips and creating transitions between them to form a state machine.
Properties

• Controller: An Animator Controller that lays out the animations we can play, their relations/transitions between each other, and the conditions for switching.
• Avatar: The rig or skeleton that can morph our model.
• Apply Root Motion: If disabled, animations that move, like a run animation, will not move the object.
• Update Mode: How the animation frames play.
Normal: Updates every frame.
Animate Physics: Updates on the physics time step (50 times per second).
Unscaled Time: Like Normal, but not tied to Time.timeScale.

• Culling Mode: Determines if the animations should play off-screen.


Always Animate: Always animate the entire character; the object is animated even when offscreen.
Cull Update Transforms: Retargeting, IK, and writing of Transforms are disabled when renderers are not visible.
Cull Completely: Animation is completely disabled when renderers are not visible.

6.2 Animation State Machine


The basic idea is that a character is engaged in some particular kind of action at any given time. The actions
available will depend on the type of gameplay but typical actions include things like idling, walking,
running, jumping, etc. These actions are referred to as states, in the sense that the character is in a “state”
where it is walking, idling or whatever. In general, the character will have restrictions on the next state it
can go to rather than being able to switch immediately from any state to any other. For example, a running
jump can only be taken when the character is already running and not when it is at a standstill, so it should
never switch straight from the idle state to the running jump state. The options for the next state that a
character can enter from its current state are referred to as state transitions. Taken together, the set of
states, the set of transitions and the variable to remember the current state form a state machine.
The states and transitions of a state machine can be represented using a graph diagram, where the nodes
represent the states and the arcs (arrows between nodes) represent the transitions. You can think of the
current state as being a marker or highlight that is placed on one of the nodes and can then only jump to
another node along one of the arrows.

The importance of state machines for animation is that they can be designed and updated quite easily with
relatively little coding. Each state has a Motion associated with it that will play whenever the machine is in
that state. This enables an animator or designer to define the possible sequences of character actions and
animations without being concerned about how the code will work.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DoorManagerScript : MonoBehaviour
{
public Animator anim;

private bool trigger;

void Start()
{
anim.SetBool ("isDoorOpen", false);
trigger = false;
}

void Update()
{
trigger = anim.GetBool ("isDoorOpen");

if(Input.GetKeyDown (KeyCode.Space))
{
if(!trigger)
{
anim.SetBool ("isDoorOpen", true);
}
else
{
anim.SetBool ("isDoorOpen", false);
}
}
}
}

6.3 Blend Trees


A common task in game animation is to blend between two or more similar motions. Perhaps the best
known example is the blending of walking and running animations according to the character’s speed.
Another example is a character leaning to the left or right as he turns during a run. It is important to
distinguish between Transitions and Blend Trees. While both are used for creating smooth animation, they
are used for different kinds of situations.

• Transitions are used for transitioning smoothly from one Animation State to another over a given
amount of time. Transitions are specified as part of an Animation State Machine. A transition from one
motion to a completely different motion is usually fine if the transition is quick.
• Blend Trees are used for allowing multiple animations to be blended smoothly by incorporating parts of
them all to varying degrees. The amount that each of the motions contributes to the final effect is
controlled using a blending parameter, which is just one of the numeric animation parameters
associated with the Animator Controller. In order for the blended motion to make sense, the motions
that are blended must be of similar nature and timing. Blend Trees are a special type of state in an
Animation State Machine.
Examples of similar motions could be various walk and run animations. In order for the blend to work well,
the movements in the clips must take place at the same points in normalized time. For example, walking
and running animations can be aligned so that the moments of contact of foot to the floor take place at the
same points in normalized time (e.g. the left foot hits at 0.0 and the right foot at 0.5). Since normalized
time is used, it doesn’t matter if the clips are of different length.

Using Blend Trees


To start working with a new Blend Tree, you need to:
• Right-click on empty space on the Animator Controller Window.
• Select Create State > From New Blend Tree from the context menu that appears.
• Double-click on the Blend Tree to enter the Blend Tree Graph.
• The Animator Window now shows a graph of the entire Blend Tree while the Inspector shows the
currently selected node and its immediate children
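Once the tree is set up, a script drives the blending parameter at runtime. The sketch below assumes a Blend Tree whose blending parameter is a float named "Speed"; both the parameter name and the input axis are illustrative:

```csharp
using UnityEngine;

public class BlendDriver : MonoBehaviour
{
    public Animator animator; // the Animator using the Blend Tree

    void Update()
    {
        // Read movement input and feed it to the Blend Tree's
        // blending parameter, so walk and run clips mix smoothly.
        float speed = Input.GetAxis("Vertical");
        animator.SetFloat("Speed", Mathf.Abs(speed));
    }
}
```

Because the clips are aligned in normalized time, sweeping the parameter between 0 and 1 cross-fades the foot contacts rather than producing a jarring switch.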
6.4 Timeline or Cinemachine
This section explores one of the most powerful tools in Unity, called Timeline.
To begin with, one must understand the need for Timeline when the option of animation is already
available.

• An animation is basically a clip that represents a state of a game object such as running, idle, walking,
etc. The animator takes the seat of a controller that dynamically switches the states of a game object
that it is attached with, based on predetermined conditions/rules.
So, it can be inferred that animation-animator combination is restricted to changing states of a game
object.
But what if you wanted to impact several objects in a linear sequence at different moments? That
problem gets solved by timeline :)

• Definition
The Unity Timeline Editor is a built-in tool that allows you to create and edit cinematic content, gameplay
sequences, audio sequences, and complex particle effects. You can move clips around the Timeline, change
when they start, and decide how they should blend and behave with other clips on the track.

• How does it work?


It takes two elements. A timeline asset and a playable director. These components resemble the
relationship of an animation-animator. They both work the same way :)

• A Timeline Asset is any media (tracks, clips, recorded animations) that can be used in your project. It
could be an outside file or an image. It could also be assets created in Unity, such as an Animator
Controller Asset or an Audio Mixer Asset.
• A playable director connects this asset to a game object resulting in a timeline instance.

• Timeline Overview

1. Timeline Asset: This is a track that is linked to a GameObject that exists in the hierarchy. It will store
keyframes associated with that GameObject in order to perform animations or determine whether the
GameObject is active.

2. Associated GameObject: This is the GameObject that the track is linked to.

3. Frame: This is the current frame in the timeline that has been set. When you want to change
animations, you will set the keyframes at the starting and ending frames.
4. Track Group: As scenes grow, so will the number of tracks. By grouping tracks, you can keep your tracks
organized.

5. Record Button: When this is active, you can change the current frame and set the position and/or
rotation of the GameObject in the scene, and it will record the changes.

6. Curves Icon: Clicking this will open the Curves view to give you the finer details on the associated
animation keyframes so that you can adjust them as needed.

• Types of Tracks in Timeline


A track is an action (clip) that applies to a specific game object. Each track type facilitates a different type of action (clip).

• Track Group lets you make a collection of tracks.


• Activation Track controls when to activate a track on the Timeline.
• Animation Track allows you to import existing animation clips or create an animation from scratch
directly within Timeline.
• Audio Track allows you to import existing audio clips and make edits. Note that you cannot preview
audio outside of Play mode.
• Control Track lets you take control of time-related elements of a GameObject, such as a Playable
Director or a ParticleSystem.
• Playable Track allows you to trigger other timeline sequences.
• Signal Track establishes a communication channel between the Timeline and outside systems.
• Cinemachine Track Allows you to control Cinemachine cameras within the Timeline.
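A timeline instance is ultimately driven by its Playable Director component, which can also be controlled from script. A minimal sketch, assuming the director is assigned in the Inspector and the player object carries the "Player" tag:

```csharp
using UnityEngine;
using UnityEngine.Playables;

public class CutsceneTrigger : MonoBehaviour
{
    public PlayableDirector director; // plays the Timeline asset

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            // Rewind and start the timeline instance.
            director.time = 0;
            director.Play();
        }
    }
}
```

This is a common pattern for firing a cutscene when the player walks into a trigger volume.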
Advanced Concept of Unity3D
7.1 Setup Project for Android
• Download the Software Development Kit (SDK)
• Download the Java Development Kit (JDK)
• Download the Native Development Kit (NDK)
• Gradle
All of these kits are required.

• Configuring the build


Before you create a build, configure your project’s settings so that Unity builds the application with the
runtime settings and build system properties you want. There are two sets of settings that configure a
Unity build:
• Player Settings
• Build Settings

• Publishing format
Unity can build Android applications in the following publishing formats:
• APK
• Android App Bundle (AAB)

• Building
To build your Unity application for Android:
• Select File > Build Settings.
• From the list of platforms in the Platform pane, select Android.

7.2 Design your Project Architecture


Unity is a powerful suite of tools (Project IDE, Code IDE, run-time) for game development.
IDE stands for Integrated Development Environment.
Architecture is a hot topic in Unity — especially related to scaling up to larger Unity game projects.

• Mini MVCS
Here is Mini MVCS (Model-View-Controller-Service) architecture for Unity. Mini MVCS (hereafter called
Mini), is specifically designed for the unique aspects of game development in the Unity platform (Scenes,
Prefabs, Serialization, GameObjects, MonoBehaviours, etc…)
• A light-weight custom MVCS Architecture — For high-level organization
• A custom CommandManager — For decoupled, global communication
• Project Organization — A prescriptive solution for structuring your work
• Code Template — A prescriptive solution for best practices

Areas of Concern
Here is an overview of general MVCS fundamentals as it applies to MiniMVCS.
MiniMVCS — This is the parent object which composes the following…
➢ Model Stores data. Sends Events
➢ View Renders audio/video to and captures input from the user. Observes Commands. Sends Events
➢ Controller This is the ‘glue’ that brings everything together. It observes events from other actors and
calls methods on the actors. It observes Commands (from any other Controllers) and sends Commands
➢ Service Loads/sends data from any backend services (e.g. Multiplayer). Sends Events
➢ Context While not officially an ‘actor’, this has an important role and is referenced by each of the 5
concepts above. It is the communication bus providing a way to send messages (called Commands) and
to look up any Model(s) via the included ModelLocator class.

7.3 Debugging
Debugging is a frequently performed task not just for general software developers but also for game
developers. During a debugging process of a game, most issues can be identified by simulating a code
walkthrough.
However, reading a codebase line by line and identifying logical errors is cumbersome. Moreover, when it
comes to Unity application development, this becomes further complex because most of your code blocks
are attached to game objects, and those trigger various game actions. Therefore, you’ll have to find a
systematic way to debug and carry out fixes faster and easier. Fortunately, Unity has provided many
helpful debugging tools that make debugging a piece of cake! Apart from that, you’re free to utilize the
comprehensive debugging tools offered by Visual Studio.

• Using Logs for Debugging


Using log messages for debugging is a fundamental technique for understanding how your functionality works. You can log anything and everything that you want. In fact, debugging will help identify critical bugs and help solve problems faster.

• Using the Debugger


Unity provides a Debug class that exposes a vast set of methods that help provide debug statements in the
code. All these statements can be observed in the “Unity Debug Console.”
The simplest method offered by Unity is Debug.Log(message:object). Unity also allows developers to add debug logs based on priority; for example, you can log an error or a warning by using methods such as Debug.LogWarning() or Debug.LogError(). The method accepts a variable of type object (or a subclass of object), which allows you to print debugging messages and the data being processed.
Add the below code block to your DebugScript.cs using Visual Studio editor:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class DebugScript : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
        string debugMessage = "This is a sample debugging message";
        Debug.Log(debugMessage);            // prints the message in the Console
        Debug.LogWarning("Sample warning"); // warning priority
        Debug.LogError("Sample error");     // error priority
    }
}

7.4 Profiler
Every game creator knows that smooth performance is essential to creating immersive gaming experiences
– and to achieve that, you need to profile your game.

The idea behind the Profiler is to provide as much timing, memory, and statistical information as possible.
You can learn:

• how long your game needs to compute and render a frame
• what takes so long (scripts, physics, graphics, etc.)
• the cause of spikes (short moments when your game’s fps drops)
• what is generating garbage
• how much memory the game allocates
• and many more…
The Profiler is divided into several profiling areas; the three below will be the most interesting.
1. The Rendering category shows information about Batches, SetPass Calls (formerly draw calls), and
Triangle and Vertex counts.
2. The Memory category reports memory statistics. Be sure that your game is not consuming too much!
3. The CPU Usage section shows the total time needed to compute and render each frame, with times
categorized into Rendering, Scripts, Physics, etc. Here you will find much more information about the Unity
internals and your own scripts’ performance. The data covers only a single frame, so you have to click
anywhere on the graph to select a frame. By default you can expand method calls only one level deep;
enable the Deep Profile toggle to browse the script calls more deeply. Be careful with deep profiling:
it requires a lot of memory and it makes your game run significantly slower.
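To see exactly where your own code shows up in the CPU Usage area, you can wrap suspect sections in profiler samples. A minimal sketch (the ExpensiveWork method is a placeholder for whatever you want to measure):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

public class ProfiledBehaviour : MonoBehaviour
{
    void Update()
    {
        // The sample name appears as its own entry in the
        // Profiler's CPU Usage hierarchy, even without Deep Profile.
        Profiler.BeginSample("ExpensiveWork");
        ExpensiveWork();
        Profiler.EndSample();
    }

    void ExpensiveWork()
    {
        // Placeholder for the code being measured.
    }
}
```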
• The spikes

Let’s talk about spikes. This is the usual term for moments when your game’s fps drops significantly for a
split second. Why is this called a “spike”? Because in the Profiler it looks just like a spike standing out of the
ground.

A spike tells you that, at that exact moment, something caused the frame to take longer than usual to
compute, and you should inspect it.
The most common known causes of spikes are:
➢ Garbage collector
If you know the .NET platform well enough, you are most probably aware of the garbage collector. If you
don’t, consider that all the memory allocated by your game at runtime must be freed at some point, and
this also happens at runtime. This process is called garbage collection, and it freezes your game until it is
finished. Usually it takes only a fraction of a second, but that is more than enough to make your game feel
laggy and unpleasant. The only way to address this issue is to prevent garbage collection from happening
in the first place, by minimizing runtime allocations.
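A common way to keep the garbage collector quiet is to allocate buffers once and reuse them, rather than allocating inside Update(). A minimal sketch of that pattern (the sphere query is just an illustrative workload):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class AllocationFreeExample : MonoBehaviour
{
    // Allocated once and reused every frame, instead of
    // creating a new list/array inside Update().
    private readonly List<Collider> nearbyColliders = new List<Collider>(64);
    private readonly Collider[] overlapResults = new Collider[64];

    void Update()
    {
        nearbyColliders.Clear(); // reuse, don't reallocate

        // The NonAlloc variant writes into a preallocated array
        // and produces no garbage.
        int count = Physics.OverlapSphereNonAlloc(
            transform.position, 5f, overlapResults);

        for (int i = 0; i < count; i++)
            nearbyColliders.Add(overlapResults[i]);
    }
}
```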
➢ Scripts and expensive algorithms
Sometimes spikes will be generated by your own scripts. Perhaps you are performing an operation that is
too expensive and could be optimized, or that should be done on a separate thread.
➢ Background processes and operating system itself
Sometimes you may experience spikes that are not your game’s fault. These spikes show a large amount of
time assigned to the Other category. They can be seen on operating systems that allow many apps to run in
the background (desktop operating systems or Android). There is not much you can do about this directly:
kill as many background apps as possible, keep your Profiler open, and observe whether the spikes go
away. You can always try profiling on a different device.

7.5 Navigation
Have you ever wondered how the various NPCs (Non-Playable Characters) in a game move around the
game world, avoiding objects and sometimes even avoiding you, only to pop out from behind you and give
you a jump scare? How is this done so realistically? How do these so-called bots decide which paths to take
and which to avoid?
In Unity, this miracle (not so much of one once you know how it works) is achieved using NavMeshes.
NavMeshes, or Navigation Meshes, are part of the navigation and pathfinding system of Unity. The
navigation system allows you to create characters that can intelligently move around the game world,
using navigation meshes that are created automatically from your Scene geometry. Dynamic obstacles
allow you to alter the navigation of the characters at runtime, while off-mesh links let you build specific
actions like opening doors or jumping down from a ledge.
• Different Components of NavMesh
The Unity NavMesh system consists of the following pieces:
➢ NavMesh (short for Navigation Mesh) is a data structure which describes the walkable surfaces of the
   game world and allows a path to be found from one walkable location to another. The data structure is
   built, or baked, automatically from your level geometry.
➢ The NavMesh Agent component helps you create characters which avoid each other while moving
   towards their goals. Agents reason about the game world using the NavMesh, and they know how to
   avoid each other as well as moving obstacles.
➢ The Off-Mesh Link component allows you to incorporate navigation shortcuts which cannot be
   represented as a walkable surface. For example, jumping over a ditch or a fence, or opening a door
   before walking through it, can all be described as Off-Mesh Links.
➢ The NavMesh Obstacle component allows you to describe moving obstacles that agents should avoid
   while navigating the world. A barrel or a crate controlled by the physics system is a good example of an
   obstacle. While the obstacle is moving, the agents do their best to avoid it; once the obstacle becomes
   stationary, it carves a hole in the NavMesh so that the agents can change their paths to steer around it,
   or, if the stationary obstacle is blocking the way, find a different route.
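In script, moving an agent is usually just a matter of setting a destination on its NavMesh Agent; the navigation system computes the path over the baked NavMesh. A minimal sketch, assuming the scene has a baked NavMesh and the player reference is assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class ChasePlayer : MonoBehaviour
{
    public Transform player;   // assign in the Inspector
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // The agent finds a path over the baked NavMesh and steers
        // around other agents and NavMesh Obstacles automatically.
        if (player != null)
            agent.SetDestination(player.position);
    }
}
```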

7.6 Light Baking


Baked Lights are Light components which have their Mode property set to Baked. Use Baked mode for
Lights used for local ambience, rather than fully featured Lights. Unity pre-calculates the illumination from
these Lights before run time, and does not include them in any run-time lighting calculations. This means
that there is no run-time overhead for baked Lights. Baked Lights do not change in response to actions
taken by the player, or events which take place in the Scene. They are mainly useful for increasing
brightness in dark areas without needing to adjust all of the lighting within a Scene.
Baked Lights are also the only Light type for which dynamic GameObjects cannot cast shadows on other
dynamic GameObjects.

Advantages of baked lighting


• High-quality shadows from static GameObjects on static GameObjects in the light map at no
additional cost.
• Offers indirect lighting.
• All lighting for static GameObjects can be handled with just one Texture fetch from the light map in the Shader.
Disadvantages of baked lighting
• No real-time direct lighting (that is, no specular lighting effects).
• No shadows from dynamic GameObjects on static GameObjects.
• You only get low-resolution shadows from static GameObjects on dynamic GameObjects using Light
Probes.
• Increased memory requirements compared to real-time lighting for the light map texture set, because
light maps need to be more detailed to contain direct lighting information.
Unity also precomputes direct baked lighting, which means that light direction information is not available
to Unity at run time. Instead, a small number of Texture operations handle all light calculations for baked
Lights in the Scene area. Without this information, Unity cannot carry out calculations for specular and
glossy reflections.

7.7 Occlusion Culling


Processing power has always been a big constraint for games. In real-time rendering there are at least
three performance goals we want to achieve: more frames per second, higher resolution, and more objects
in the scene. Speed-up techniques and acceleration schemes are therefore always a necessity.
Use Occlusion Culling to optimize the rendering and performance of your 3D Unity projects. Occlusion Culling
disables GameObjects that are fully hidden (occluded) by other GameObjects. This means CPU and GPU
time isn’t wasted rendering objects that will never be seen by the Camera. Because Occlusion Culling is
calculated in a pre-pass, it’s not a good fit for projects where Scene geometry is instantiated or generated
at runtime. An ideal use case for Occlusion Culling is a Scene with distinct zones or areas that are hidden
from each other by corridors, walls, mountains, or buildings.

• Static Occluders have either a Mesh or Terrain Renderer, are opaque, and do not move at runtime.
Examples include walls, buildings, and mountains. If using LOD groups with a GameObject designated
as a Static Occluder, the base LOD (LOD0) is used in the calculation.
• Static Occludees have any type of Renderer and do not move at runtime. Examples include level decor,
  like shrubs, or any GameObject that is likely to be occluded. A GameObject can be both an Occluder and
  an Occludee. Dynamic GameObjects can be occluded, but cannot occlude other GameObjects.

7.8 Mobile Game Optimization Tips

• Project configuration
There are a few Project Settings that can impact your mobile performance.

➢ Reduce or disable Accelerometer Frequency


Unity polls your mobile device’s accelerometer several times per second. In the Player settings, disable this
if it’s not being used in your application, or reduce its frequency for better performance.
➢ Disable unnecessary Player or Quality settings
In the Player settings, disable Auto Graphics API for unsupported platforms to prevent generating
excessive shader variants. Disable Target Architectures for older CPUs if your application is not
supporting them. In the Quality settings, disable needless Quality levels.
➢ Disable unnecessary physics
If your game is not using physics, uncheck Auto Simulation and Auto Sync Transforms. These will just
slow down your application with no discernible benefit.
➢ Choose the right frame rate
Mobile games often cap their frame rate at 30 fps to balance smoothness against battery drain and
thermal throttling. You can also adjust the frame rate dynamically during runtime with
Application.targetFrameRate. For example, you could drop to 30 fps or below for slow or relatively static
scenes and reserve higher fps settings for gameplay.
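The dynamic adjustment above takes only a couple of lines; the scene-name check here is just an illustrative placeholder:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

public class FrameRateManager : MonoBehaviour
{
    void Start()
    {
        // Hypothetical rule: run menus at 30 fps, gameplay at 60 fps.
        bool isMenu = SceneManager.GetActiveScene().name == "MainMenu";
        Application.targetFrameRate = isMenu ? 30 : 60;
    }
}
```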
➢ Avoid large hierarchies
Split your hierarchies. If your GameObjects do not need to be nested in a hierarchy, simplify the
parenting. Smaller hierarchies benefit from multithreading to refresh the Transforms in your scene.
Complex hierarchies incur unnecessary Transform computations and more cost to garbage collection.
➢ Transform once, not twice
Additionally, when moving Transforms, use Transform.SetPositionAndRotation to update both position
and rotation at once. This avoids the overhead of modifying a transform twice.
If you need to Instantiate a GameObject at runtime, a simple optimization is to parent and reposition
during instantiation:
GameObject.Instantiate(prefab, parent);
GameObject.Instantiate(prefab, position, rotation, parent);
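For existing objects, instead of assigning transform.position and transform.rotation separately (two transform changes), combine them into one call:

```csharp
using UnityEngine;

public class MoveExample : MonoBehaviour
{
    void Teleport(Vector3 targetPosition, Quaternion targetRotation)
    {
        // Slower: two separate transform updates.
        // transform.position = targetPosition;
        // transform.rotation = targetRotation;

        // Faster: one combined update.
        transform.SetPositionAndRotation(targetPosition, targetRotation);
    }
}
```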
➢ Assume Vsync is enabled
Mobile platforms won’t render half-frames. Even if you disable Vsync in the Editor (Project Settings >
Quality), Vsync is enabled at the hardware level. If the GPU cannot refresh fast enough, the current
frame will be held, effectively reducing your fps.

• Assets
The asset pipeline can dramatically impact your application’s performance. An experienced technical artist
can help your team define and enforce asset formats, specifications, and import settings for smooth
processes.
Don’t rely on default settings. Use the platform-specific override tab to optimize assets such as textures
and mesh geometry. Incorrect settings might yield larger build sizes, longer build times, and poor memory
usage. Consider using the Presets feature to help customize baseline settings that will enhance a specific
project.

➢ Import textures correctly


Most of your memory will likely go to textures, so the import settings here are critical. In general, try to
follow these guidelines:
✓ Lower the Max Size: Use the minimum settings that produce visually acceptable results. This is non-
destructive and can quickly reduce your texture memory.
✓ Use powers of two (POT): Unity requires POT texture dimensions for mobile texture compression
  formats (PVRTC or ETC).
✓ Atlas your textures: Placing multiple textures into a single texture can reduce draw calls and speed
up rendering. Use the Unity Sprite Atlas or the third-party TexturePacker to atlas your textures.
✓ Toggle off the Read/Write Enabled option: When enabled, this option creates a copy in both CPU-
and GPU-addressable memory, doubling the texture’s memory footprint. In most cases, keep this
disabled. If you are generating textures at runtime, enforce this via Texture2D.Apply, passing in
makeNoLongerReadable set to true.
✓ Disable unnecessary Mip Maps: Mip Maps are not needed for textures that remain at a consistent
size on-screen, such as 2D sprites and UI graphics (leave Mip Maps enabled for 3D models that vary
their distance from the camera).
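For runtime-generated textures, the Read/Write advice above looks like this in code; once uploaded, the CPU-side copy is released:

```csharp
using UnityEngine;

public class RuntimeTextureExample : MonoBehaviour
{
    Texture2D BuildTexture()
    {
        var tex = new Texture2D(256, 256, TextureFormat.RGBA32, false);
        // ... fill pixels with SetPixel/SetPixels here ...

        // Upload to the GPU and free the CPU-side copy:
        // Apply(updateMipmaps, makeNoLongerReadable)
        tex.Apply(false, true);
        return tex;
    }
}
```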

➢ Compress textures
Use Adaptive Scalable Texture Compression (ASTC) for both iOS and Android. The vast majority of
games in development target min-spec devices that support ASTC compression.
➢ Adjust mesh import settings
Much like textures, meshes can consume excess memory if not imported carefully. To minimize
meshes’ memory consumption:
✓ Compress the mesh: Aggressive compression can reduce disk space (memory at runtime, however,
is unaffected). Note that mesh quantization can result in inaccuracy, so experiment with
compression levels to see what works for your models.
✓ Disable Read/Write: Enabling this option duplicates the mesh in memory, which keeps one copy of
the mesh in system memory and another in GPU memory. In most cases, you should disable it (in
Unity 2019.2 and earlier, this option is checked by default).
✓ Disable rigs and BlendShapes: If your mesh does not need skeletal or blendshape animation,
disable these options wherever possible.
✓ Disable normals and tangents: If you are absolutely certain the mesh’s material will not need
normals or tangents, uncheck these options for extra savings.

➢ Check your polygon counts


Higher-resolution models mean more memory usage and potentially longer GPU times. Does your
background geometry need half a million polygons? Consider cutting down models in your DCC
package of choice. Delete unseen polygons from the camera’s point of view, and use textures and
normal maps for fine detail instead of high-density meshes.

• Graphics and GPU optimization


With each frame, Unity determines the objects that must be rendered and then creates draw calls. A draw
call is a call to the graphics API to draw objects (e.g., a triangle), whereas a batch is a group of draw calls to
be executed together.
As your projects become more complex, you’ll need a pipeline that optimizes the workload on your GPU.
The Universal Render Pipeline (URP) currently uses a single-pass forward renderer to bring high-quality
graphics to your mobile platform (deferred rendering will be available in future releases). The same
physically based Lighting and Materials from consoles and PCs can also scale to your phone or tablet.
The following guidelines can help you to speed up your graphics.
➢ Batch your draw calls
Batching objects to be drawn together minimizes the state changes needed to draw each object in a
batch. This leads to improved performance by reducing the CPU cost of rendering objects. Unity can
combine multiple objects into fewer batches using several techniques:
✓ Dynamic batching: For small meshes, Unity can group and transform vertices on the CPU, then
draw them all in one go. Note: Only use this if you have enough low-poly meshes (less than 900
vertex attributes and no more than 300 vertices). The Dynamic Batcher won’t batch meshes larger
than this, so enabling it will waste CPU time looking for small meshes to batch in every frame.
✓ Static batching: For non-moving geometry, Unity can reduce draw calls for meshes that share the
same material. While it is more efficient than dynamic batching, it uses more memory.
✓ GPU instancing: If you have a large number of identical objects, this technique batches them more
efficiently through the use of graphics hardware.
✓ SRP Batching: Enable the SRP Batcher in your Universal Render Pipeline Asset under Advanced. This
can speed up your CPU rendering times significantly, depending on the Scene.
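GPU instancing, for instance, is just a checkbox on the Material (Enable GPU Instancing), which can also be toggled from script:

```csharp
using UnityEngine;

public class InstancingSetup : MonoBehaviour
{
    public Material rockMaterial; // shared by many identical rocks

    void Awake()
    {
        // Equivalent to ticking "Enable GPU Instancing" on the Material asset.
        rockMaterial.enableInstancing = true;
    }
}
```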

➢ Use the Frame Debugger


The Frame Debugger shows how each frame is constructed from individual draw calls. This is an
invaluable tool for troubleshooting your shader properties that can help you analyze how the game is
rendered.
➢ Avoid too many dynamic lights
It is crucial to avoid adding too many dynamic lights to your mobile application. Consider alternatives
like custom shader effects and light probes for dynamic meshes, as well as baked lighting for static
meshes.
➢ Disable shadows
Shadow casting can be disabled per MeshRenderer and light. Disable shadows whenever possible to
reduce draw calls. You can also create fake shadows using a blurred texture applied to a simple mesh
or quad underneath your characters. Otherwise, you can create blob shadows with custom shaders.
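Disabling shadow casting per renderer, as suggested above, is a one-line change:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class DisableShadows : MonoBehaviour
{
    void Start()
    {
        // This renderer will no longer cast shadows (it can still receive them).
        GetComponent<MeshRenderer>().shadowCastingMode = ShadowCastingMode.Off;
    }
}
```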
➢ Use Light Layers
For complex scenes with multiple lights, separate your objects using layers, then confine each light’s
influence to a specific culling mask.
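For example, a light can be confined to objects on a single layer through its culling mask (the layer name here is a placeholder; define it in the Tags and Layers settings):

```csharp
using UnityEngine;

public class ConfineLight : MonoBehaviour
{
    void Start()
    {
        // This light now only affects GameObjects on the "Characters" layer.
        GetComponent<Light>().cullingMask = LayerMask.GetMask("Characters");
    }
}
```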
➢ Use Light Probes for moving objects
Light Probes store baked lighting information about the empty space in your Scene, while providing
high-quality lighting (both direct and indirect). They use Spherical Harmonics, which calculate very
quickly compared to dynamic lights.
➢ Use Level of Detail (LOD)
As objects move into the distance, Level of Detail can adjust or switch them to use simpler meshes with
simpler materials and shaders, to aid GPU performance.
➢ Use Occlusion Culling to remove hidden objects
Objects hidden behind other objects can potentially still render and cost resources. Use Occlusion
Culling to discard them.
While frustum culling outside the camera view is automatic, occlusion culling is a baked process. Simply
mark your objects as Static Occluders or Occludees, then bake through the Window > Rendering >
Occlusion Culling dialog. Though not necessary for every scene, culling can improve performance in
many cases.
➢ Limit use of cameras
Each camera incurs some overhead, whether it’s doing meaningful work or not. Only use Camera
components required for rendering. On lower-end mobile platforms, each camera can use up to 1 ms
of CPU time.
➢ Keep shaders simple
The Universal Render Pipeline includes several lightweight Lit and Unlit shaders that are already
optimized for mobile platforms. Try to keep your shader variations as low as possible, as they can have
a dramatic effect on runtime memory usage.
➢ Minimize overdraw and alpha blending
Avoid drawing unnecessary transparent or semi-transparent images. Mobile platforms are greatly
impacted by the resulting overdraw and alpha blending. Don’t overlap barely visible images or effects.
➢ Limit Post-processing effects
Full-screen post-processing effects, such as bloom or depth of field, can be very expensive on mobile;
use them sparingly or avoid them altogether.

➢ Be careful with Renderer.material


Accessing Renderer.material in scripts duplicates the material and returns a reference to the new copy.
This breaks any existing batch that already includes the material. If you wish to access the batched
object’s material, use Renderer.sharedMaterial instead.
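A quick illustration of the difference:

```csharp
using UnityEngine;

public class MaterialAccess : MonoBehaviour
{
    void Start()
    {
        var meshRenderer = GetComponent<MeshRenderer>();

        // BAD for batched objects: silently duplicates the material,
        // so this renderer is no longer batched with the others.
        // meshRenderer.material.color = Color.red;

        // GOOD: accesses the shared material without breaking the batch.
        // Note this changes every renderer using that material.
        meshRenderer.sharedMaterial.color = Color.red;
    }
}
```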
➢ Optimize SkinnedMeshRenderers
Rendering skinned meshes is expensive. Make sure that every object using a SkinnedMeshRenderer
requires it. If a GameObject only needs animation some of the time, use the BakeMesh function to
freeze the skinned mesh in a static pose, then swap to a simpler MeshRenderer at runtime.
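A sketch of the BakeMesh swap described above; the component references are assumptions that would be assigned elsewhere in a real project:

```csharp
using UnityEngine;

public class FreezeSkinnedMesh : MonoBehaviour
{
    public SkinnedMeshRenderer skinnedRenderer; // the expensive animated renderer
    public MeshFilter staticMeshFilter;         // a cheap MeshRenderer + MeshFilter stand-in

    public void FreezeCurrentPose()
    {
        // Capture the current pose into an ordinary mesh...
        var frozen = new Mesh();
        skinnedRenderer.BakeMesh(frozen);

        // ...then swap: show the cheap static copy, hide the skinned one.
        staticMeshFilter.mesh = frozen;
        staticMeshFilter.GetComponent<MeshRenderer>().enabled = true;
        skinnedRenderer.enabled = false;
    }
}
```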
➢ Minimize Reflection Probes
A Reflection Probe can create realistic reflections, but can be very costly in terms of batches. Use low-
resolution cubemaps, culling masks, and texture compression to improve runtime performance.
