
ADVANCE NIGHT VISION SYSTEM

A
Seminar Report
submitted
in partial fulfilment
for the award of the Degree of
Bachelor of Technology
in Department of Mechanical Engineering

Supervisor: Mr. Dinesh Kumar Sharma, Assistant Professor

Submitted By: Vijendra Choudhary (14ESME117)

Department of Mechanical Engineering

Swami Keshvanand Institute of Technology, Management & Gramothan, Jaipur

Rajasthan Technical University, Kota

May 2018

Candidate’s Declaration
I hereby declare that the work, which is being presented in the Seminar titled “ADVANCE NIGHT VISION SYSTEM”, in partial fulfilment for the award of the Degree of “Bachelor of Technology” in the Department of Mechanical Engineering, and submitted to the Department of Mechanical Engineering, Swami Keshvanand Institute of Technology, Management & Gramothan, Jaipur, is a record of my own investigations carried out under the guidance of Mr. DINESH KUMAR SHARMA, Department of Mechanical Engineering, SKIT, Jaipur.

I have not submitted the matter presented in this report anywhere for the award of any other Degree.

……………………..

VIJENDRA CHOUDHARY

ROLL NO. 14ESKME117

SKIT, JAIPUR

Counter Signed by
Mr. DINESH KUMAR SHARMA
.....................................

Swami Keshvanand Institute
of Technology, Management & Gramothan, Jaipur
Department of Mechanical Engineering

CERTIFICATE

This is to certify that VIJENDRA CHOUDHARY, University Roll No. 14ESKME117, of VIII Semester, B.Tech (Mechanical Engineering), 2017-18, has presented a seminar titled “ADVANCE NIGHT VISION SYSTEM” in partial fulfilment for the award of the degree of Bachelor of Technology under Rajasthan Technical University, Kota.

Date: 15-08-2018

Mr. ANKIT AGARWAL (Asst. Professor)          Mr. DINESH KUMAR SHARMA (Asst. Professor)
Department of Mechanical Engineering         Department of Mechanical Engineering
(Seminar Faculty)                             (Supervisor)

ACKNOWLEDGMENET
I take this opportunity to express my gratitude to Mr. DINESH KUMAR SHARMA, Assistant Professor, who has given guidance and direction to me during this Seminar. His versatile knowledge about the “ADVANCE NIGHT VISION SYSTEM” has helped me through the critical times during the span of this Seminar.
I am very grateful to our course faculty Mr. DINESH KUMAR SHARMA (Assistant Professor) and Mr. ANKIT AGARWAL (Assistant Professor), who analysed my presentation and suggested improvements in its grey areas.
I extend my sincere thanks to Prof. N. C. Bhandari (Head, Mechanical Engineering Department) for his kind support throughout the span of my degree. I am also thankful to Prof. S. L. Surana (Director, Academics) and Shri Jaipal Meel (Director) for their kind support.
I acknowledge here our debt to those who contributed significantly to one or more steps. I take full responsibility for any remaining sins of omission and commission.

Vijendra Choudhary
14EAKME117.
B.Tech IV Year
(Mechanical Engineering)

ABSTRACT
Night vision is one of the major advancements in vehicle safety systems. It enables better visibility of the field in which the vehicle is driven during the night time. Studies report that only a quarter of all travel by car drivers is undertaken at night, yet 40% of road accidents happen during night time. This makes the night vision system an important driver assist during poor light or night time. The major reason for night accidents is poor visibility of the field of driving, due to the limitation in headlight range and the dazzle of high-beam headlights from vehicles approaching from the opposite direction. Though the night vision systems available in the market minimize the occurrence and consequences of automobile accidents, they are not 100% efficient for the ease and pleasure of driving, especially for older drivers. Since the display is limited to a small screen which provides only a monochrome output, the driver doesn't tend to depend on night vision all the time. With this report we try to highlight the advancement of night vision which can convert the present monochromatic display into a colourised one and help the driver with better assistance.

TABLE OF CONTENTS

Certificate .................................................................................................................................
Acknowledgement.................................................................................................................
Abstract......................................................................................................................................
Chapter 1: INTRODUCTION.................................................................................................

1.1 NIGHT VISION SYSTEM ................................................................................................

Chapter 2: NIGHT VISION SYSTEM IN AUTOMOBILE..................................................................

2.1 INFRARED PROJECTORS................................................................................................

2.2 NIGHT VISION CAMERA.................................................................................................

2.3 IMAGE INTENSIFIER.......................................................................................................

2.4 INFRARED SENSORS......................................................................................................

2.5 NIGHT VISION PROCESSING UNIT................................................................

Chapter 3: WORKING OF AUTOMOTIVE NIGHT VISION SYSTEM

Chapter 4: ADVANCEMENT IN NIGHT VISION SYSTEM

4.1 PEDESTRIAN DETECTION SYSTEM.........................................................................

4.1.1 CHARACTERIZATION OF IR DOMAIN........................................................

4.1.2 WORKING OF PEDESTRIAN DETECTION SYSTEM........................................

4.1.3 PEDESTRIAN DETECTION ALGORITHM............................................................

4.1.3.1 CONTOUR BASED CANDIDATE AREA EXTRACTION.................................

4.1.3.2 CANDIDATE AREA CLASSIFICATION...............................................................

4.1.3.3 CANDIDATE AREA TRACKING.............................................................................

4.2 INTELLIGENT VISION FOR AUTOMOBILES AT NIGHT....................................

4.2.1 WORKING OF IVAN.......................................................................................................

4.3 TRUE COLOR NIGHT VISION........................................................................................

4.3.1 DESCRIPTION OF CAMERAS.....................................................................................

4.3.1.1 LIQUID CRYSTAL FILTER INTENSIFIED CAMERA......................................

Chapter 5: APPLICATIONS.....................................................................................................................

Chapter 6: CASE STUDY............................................................................................................................

Chapter 7: CONCLUSION...........................................................................................................................

Chapter 8: REFERENCES...........................................................................................................

Chapter 9: BIBLIOGRAPHY …………………………………………………………………………………..

LIST OF FIGURES
Figure 2.1: Infrared projector
Figure 2.2: Night vision camera
Figure 2.3: Photon multiplying phenomenon of the photon received in the image intensifier
Figure 2.4: The path of one electron multiplying through a channel of the MCP
Figure 2.5: Infrared sensor
Figure 2.6: Night vision processing unit
Figure 3.2: Circuit diagram of the night vision system
Figure 4.1: Automotive pedestrian detection system
Figure 4.2: Flow chart of the pedestrian detection algorithm
Figure 4.3: Grouping of body part areas using disparity information
Figure 4.4: System overview of IVAN
Figure 4.6: The IVAN system
Figure 4.7: Adaptive infrared camera
Figure 4.9: Monochrome and colour low light level imagery
Figure 4.10: Image taken with a TCNV camera demonstrating the ability to produce colour imagery while utilizing both visible and NIR signal
Figure 4.11: A liquid crystal filter shown in 3 different colour states
Figure 4.12: TCNV prototypes with LC filter and image intensified CMOS

INTRODUCTION

1.NIGHT VISION SYSTEM:

A night vision system is a technology developed for clear visibility of the field of view during the night time or under poor light. Night vision technology was first developed for military activities. Later on, the technology was adopted for commercial purposes such as automobiles and aircraft. Anything that is alive uses energy, and so do many inanimate items such as engines and rockets. Energy consumption generates heat. In turn, heat causes the atoms in an object to fire off photons in the thermal-infrared spectrum. The hotter the object, the shorter the wavelength of the infrared photons it releases. Thermal imaging takes advantage of this infrared emission. An object that is very hot will even begin to emit photons in the visible spectrum, glowing red and then moving up through orange, yellow, blue and eventually white. These reflected and emitted radiations, which fall in the infrared region, are detected by IR sensors and cameras to generate a monochromatic image that gives better visibility of the field of view during low light. Presently, there are two types of night vision technologies on the market, Far Infrared (FIR) and Near Infrared (NIR). As stated above, FIR detects the radiation which all objects emit, while NIR detects reflected illumination in a frequency range just outside the visible range of a human being. This report analyses the requirements of a night vision system, how NIR and FIR perform today under the defined conditions, and proceeds to discuss directions for future development.
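The relation between temperature and emitted wavelength quoted above follows Wien's displacement law, lambda_peak = b / T with b ≈ 2.898 × 10⁻³ m·K. A minimal Python sketch is given below; the example temperatures are illustrative assumptions, not values from this report.

```python
# Wien's displacement law: lambda_peak = b / T, with b ~= 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_um(temp_celsius: float) -> float:
    """Return the peak emission wavelength (in micrometres) of a black body."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin * 1e6  # convert metres to micrometres

# Illustrative temperatures (assumed, not from the report).
for label, t_c in [("human skin", 34.0), ("car engine", 90.0), ("road at night", 10.0)]:
    print(f"{label}: ~{peak_wavelength_um(t_c):.1f} um")
```

All of these peaks fall in the far-infrared band, which is why FIR (thermal) imaging can pick out people, animals and running engines without any external illumination.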

2.NIGHT VISION SYSTEM IN AUTOMOBILE:
All cars today have an acceptable ‘night vision’ system of sorts: the high beam headlights of the vehicle. Even though they could be improved, their performance is at least acceptable. However, in many areas, high beams are of very limited use due to oncoming traffic. The insufficient night-time visibility originates in the fact that the high beam headlights can rarely be used. A night vision system must therefore be a system that increases visibility in situations where only low beam headlights can be used. Studies report that only a quarter of all travel by car drivers is undertaken at night, yet 40% of road accidents happen during night time. This makes the night vision system an important driver assist during poor light or night time. The major reason for night accidents is poor visibility of the field of driving, due to the limitation in low beam headlight range and the dazzle of high-beam headlights from vehicles approaching from the opposite direction. Though the night vision systems available in the market minimize the occurrence and consequences of automobile accidents, they are not 100% efficient for the ease and pleasure of driving, especially for older drivers. This condition thus defines the importance of, and the need to implement, technologies for the safety of pedestrians during the night time and for better aid to the driver in understanding his field of view at a comfortable level. The short detection distances for especially dark objects under low beam conditions, versus the corresponding situation under high beam conditions, illustrate the detection distance deficiency that a night vision system should overcome. A safe driving speed should allow the driver to detect, react and stop in time before any obstacle on the road; however, most motorists actually drive faster than the visibility range of low beam headlights allows (a rough stopping-distance estimate is sketched below). The present night vision system used in automobiles is a combination of NIR with an image intensifier and FIR with thermal imaging. The night vision system uses an infrared projector, a camera, a processing unit and a display.

2.1 INFRARED PROJECTORS


The night vision system works on the principle of infrared rays. Infrared rays are invisible light rays which the human eye cannot perceive. These infrared rays are generated using infrared LEDs and infrared laser beams. The LEDs are used for the NIR system while the infrared laser is used for FIR to get a long-range view. In modern cars with a night vision system, the infrared projectors are integrated with the headlights rather than being placed separately.

Figure2.1 Infrared Projector

2.2 NIGHT VISION CAMERA
The present night vision cameras used in automobile applications are very compact and easy to accommodate. Some car manufacturers build the night vision system into their cars, while others offer it as an optional extra for the customer.

Figure 2.2: Night vision camera.

Like a normal DSLR camera, the night vision camera consists of a lens section, an image intensifier, and a photon-detecting sensor which can sense IR radiation.

2.3 IMAGE INTENSIFIER:-


Image-enhancement technology is what most people think of when you talk about night vision. In
fact, image-enhancement systems are normally called night-vision devices (NVDs). NVDs rely on a
special tube, called an image-intensifier tube, to collect and amplify infrared and visible light. In night
vision system, a conventional lens, called the objective lens, captures ambient light and some near-
infrared light. The gathered light is sent to the image-intensifier tube. The image-intensifier tube has a
photocathode, which is used to convert the photons of light energy into electrons. As the electrons
pass through the tube, similar electrons are released from atoms in the tube, multiplying the original
number of electrons by a factor of thousands through the use of a micro channel plate (MCP) in the
tube. A MCP is a tiny glass disc that has millions of microscopic holes (micro channels) in it, made
using fibre-optic technology. The MCP is contained in a vacuum and has metal electrodes on either side of the disc. Each channel is about 45 times longer than it is wide, and it works as an electron
multiplier. When the electrons from the photo cathode hit the first electrode of the MCP, they are
accelerated into the glass micro channels by the 5,000-V bursts being sent between the electrode pair.
As electrons pass through the micro channels they cause thousands of other electrons to be released in
each channel using a process called cascaded secondary emission. Basically, the original electrons
collide with the side of the channel, exciting atoms and causing other electrons to be released.
These new electrons also collide with other atoms, creating a chain reaction that results in
thousands of electrons leaving the channel where only a few entered. An interesting fact is that the
micro channels in the MCP are created at a slight angle (about a 5-degree to 8-degree bias) to
encourage electron collisions and reduce both ion and direct-light feedback from the phosphors on
the output side. At the end of the image-intensifier tube, the electrons hit a screen coated with
phosphors. These electrons maintain their position in relation to the channel they passed through,
which provides a perfect image since the electrons stay in the same alignment as the original photons.
The energy of the electrons causes the phosphors to reach an excited state and release photons. These
phosphors create the green image on the screen that has come to characterize night vision. The green
phosphor image is viewed through another lens, called the ocular lens, which allows you to magnify
and focus the image. The NVD may be connected to an electronic display, such as a monitor, or the
image may be viewed directly through the ocular lens.
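To illustrate the scale of cascaded secondary emission, the sketch below models the micro channel plate as a fixed number of wall collisions, each releasing a fixed number of secondary electrons. Both numbers are assumed round values chosen only to show how a handful of photoelectrons becomes thousands; real MCP gains depend on the applied voltage and channel geometry.

```python
# Each wall collision releases a few secondary electrons; over many collisions the
# count grows geometrically, which is why a few photoelectrons become thousands.
SECONDARY_YIELD = 2.0   # assumed electrons released per wall strike
NUM_COLLISIONS = 12     # assumed number of wall strikes along one micro channel

def mcp_gain(yield_per_strike: float, collisions: int) -> float:
    """Geometric gain of one micro channel under the assumed collision model."""
    return yield_per_strike ** collisions

electrons_in = 3
electrons_out = electrons_in * mcp_gain(SECONDARY_YIELD, NUM_COLLISIONS)
print(f"{electrons_in} electrons in -> ~{electrons_out:.0f} electrons out")  # ~12288
```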

Figure2.3: Photon multiplying phenomenon of the photon received in image intensifier

Figure2.4: The figure demonstrates the path of one electron multiplying through one of the millions
of channels in the MCP. With each bounce the electron multiplies and accelerates

2.4 INFRARED SENSORS


An infrared sensor is a device that detects infrared radiation. These sensors are made of silicon material which gets excited when infrared rays fall on it. They can sense infrared rays ranging in wavelength from 700 nanometres to 1 millimetre. They can sense both NIR and FIR and provide varying electric signals for the detected photons of varying wavelength. These electric signals are amplified and processed to generate graphic signals, which are displayed on an output device.

Figure2.5: Infrared sensor.

2.5 NIGHT VISION PROCESSING UNIT


The night vision processing unit is the main part of a night vision system; it processes the signals obtained from the infrared sensors into digital visual signals. The night vision processing unit determines the conditions of the field and performs the required amplification of the signal to give a better output. There are now more sophisticated control units which can perform a variety of functions, such as reducing the noise level in the output, and spotting high-intensity lights in the field of view and screening them so that they do not cause bright spots on the display screen.
The advanced night vision processing unit works along with the other safety and driver assist systems available in automobiles to provide an intelligent night vision system.

Figure2.6: Night vision processing unit

3. WORKING OF AUTOMOTIVE NIGHT VISION SYSTEM

In a car night vision system, during low light, the infrared projectors project IR rays onto the field of driving. The infrared LEDs emit photons towards the field, and these rays are reflected by the surroundings. The reflected rays are captured by the night vision camera in the car and detected by the IR sensors. The signal is then converted to image signals, which are displayed through the display unit (a minimal sketch of this loop is given below).
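A minimal sketch of this capture, amplify and display loop is shown below using OpenCV in Python. The camera index and the gain/offset values are assumptions; an actual automotive system reads from a dedicated IR camera and a far more capable processing unit, so this is only an illustration of the data flow.

```python
import cv2

CAMERA_INDEX = 0         # hypothetical device index for the night vision camera
GAIN, OFFSET = 2.0, 30   # assumed amplification applied by the processing unit

cap = cv2.VideoCapture(CAMERA_INDEX)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Reduce to a single channel, since the night vision output is monochromatic.
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Simple brightness/contrast amplification standing in for the processing unit.
    enhanced = cv2.convertScaleAbs(grey, alpha=GAIN, beta=OFFSET)
    cv2.imshow("night vision display", enhanced)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```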

Figure 3.1

Figure3.2: Circuit diagram of Night vision system.

4. ADVANCEMENT IN NIGHT VISION SYSTEM

Over the years automotive night vision technology has evolved a lot; night vision is now an intelligent vision system which can act based on the situation and inform the driver like a co-driver. The influence of power electronics and navigation has made the night vision system more comfortable for night driving.

4.1 PEDESTRIAN DETECTION SYSTEM:-

The capability of observing the world through visual information is a strong requirement for future driver assistance systems, since their duties are getting more complex. In particular, driver assistance systems dedicated to reducing the number of fatalities and the severity of traffic accidents impose several requirements on the sensor system. One of the major and challenging tasks is the detection and classification of pedestrians.

Naturally, the use of visual cameras is a promising approach to cope with the demands of pedestrian detection. Several different image processing methods and systems have been developed in the last few years, including shape-based methods, texture and template based methods, stereo, as well as motion cues. But none of these is efficient at detecting pedestrians during the night time, as they work based on visible light.

In order to facilitate the recognition process and to enable the detection of pedestrians in dark environments, passive infrared (IR) cameras have come into focus. The first pedestrian detection systems for IR images and videos have been developed, demonstrating the potential and benefits that IR cameras can provide.

Pedestrian detection using IR rays uses FIR, or thermal infrared detection, to identify pedestrians or animals in the field. Every living thing and every working engine releases energy in the form of heat radiation. During the night time, the non-living things in the surrounding environment stay cool. This provides a suitable condition for the thermal image sensors to detect the sources that emit heat radiation.

4.1.1 CHARACTERIZATION OF IR DOMAIN:-


Images in the IR domain convey a type of information very different from images in the visible spectrum. In the IR domain the image of an object relates to its temperature and the amount of heat it emits, and is not affected by illumination changes. Generally, the temperature of people is higher than the environmental temperature and their heat radiation is sufficiently high compared to the background. Therefore, in IR images pedestrians are bright and sufficiently contrasted with respect to the background, making IR imagery well suited to their localization. Other objects which actively radiate heat (cars, trucks, etc.) have a similar behaviour; however, people can be recognized thanks to their shape and aspect ratio.
One major point in favour of IR cameras is their independence from lighting changes: IR cameras can be used in day-time or night-time with little or no difference, extending vision beyond the usual limitations of daylight cameras. Moreover, the absence of colours or strong textures eases the processing towards interpretation. Furthermore, the problem of shadows is greatly reduced.

4.1.2. WORKING OF PEDESTRIAN DETECTION SYSTEM:-

The main task of the pedestrian detection system is to identify the presence of pedestrians or animals near the field of driving and to predict, inform and warn the driver based on the behaviour of the identified object. For this, a series of processing steps and calculations are done by the night vision control unit to determine the position, behaviour and size of the object. All of this is done with the help of real-time image processing.
Since the vehicle is in continuous movement, tracking the position of the detected object is a somewhat complicated task. For this, the image processing unit uses multiple frames of images at an interval of time and relates them to the vehicle speed to determine the relative position of the object (a simple triangulation sketch is given below).
The ratios of the polar coordinates of the detected images at consecutive intervals determine the size and type of the identified object. When an object is detected, a bounding box appears on the screen to indicate its position on the output screen.
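One simple way to recover the relative position mentioned above is triangulation between two frames: the vehicle travels a known distance (speed × frame interval) while the object's bearing, derived from its pixel position, changes. The sketch below assumes a stationary object and straight-line motion; the speeds, interval and bearings are illustrative only.

```python
import math

def relative_position(speed_ms: float, interval_s: float,
                      bearing1_deg: float, bearing2_deg: float):
    """Estimate lateral offset x and longitudinal distance z of a stationary object
    from its bearing (angle off the camera axis) in two consecutive frames."""
    d = speed_ms * interval_s                  # distance travelled between the frames
    t1 = math.tan(math.radians(bearing1_deg))
    t2 = math.tan(math.radians(bearing2_deg))  # bearing grows as the object gets closer
    x = d * t1 * t2 / (t2 - t1)                # lateral offset from the driving line
    z = x / t1                                 # longitudinal distance at the first frame
    return x, z

# Illustrative values: 14 m/s (~50 km/h), 0.5 s between processed frames.
x, z = relative_position(14.0, 0.5, bearing1_deg=4.0, bearing2_deg=5.0)
print(f"object ~{z:.0f} m ahead, ~{x:.1f} m to the side")
```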

Figure4.1: Automotive Pedestrian detection system

Figure4.2
4.1.3. PEDESTRIAN DETECTION ALGORITHM

The process flowchart of the newly proposed pedestrian detection algorithm is shown in Figure 4.2. In many cases of night-time pedestrian detection, processing methods based on image binarization are used, because the intensity (i.e. temperature) of pedestrians is higher than that of background objects. However, in the daytime or in bad weather, making assumptions about the intensity is not always effective because of environmental influences on FIR images.

Figure 4.2: Flow chart of pedestrian detection algorithm

Table 2: Features of FIR images

4.1.3.1 Contour-based candidate area extraction:-


The method of contour-based candidate area extraction uses the intensity difference between a pedestrian and the background, together with a constrained condition on the distances to the pedestrian's body parts. The constrained condition is based on the assumption that the distance between the FIR camera and each pedestrian body part (head, arms, torso, and legs) is the same. However, the contour of a pedestrian is not always a continuous line and is usually disconnected at various parts of the body. Therefore, the candidate area extraction method is made up of two steps:
(1) extraction of body part areas, and
(2) grouping of body part areas, as described below.
First, the contours of the pedestrian and the background are extracted from the FIR image. In consideration of cases of bad weather, where blurred images are obtained, the Prewitt operator is used so that smooth contours can be extracted (a rough sketch of this step and of the disparity-based grouping is given at the end of this subsection). Next, neighbouring contour points are connected and contour groups are constructed. The reason for this process is to prevent errors in the following disparity segmentation process. The constructed contour groups may include contour points that belong to different objects, so a contour group is divided into several blocks (e.g. 4x4 pixels) and then reconstructed after judging whether or not these blocks belong to the same object. In the judgment process, each block's disparity (which depends on the distance from the FIR camera) is calculated using the stereo cameras, and blocks are classified into the same group if (i) the disparity difference between the blocks is within a certain range, and (ii) the blocks belonged to the same contour group before being divided. The last step of body part area extraction is the expansion of the contour groups. Contour points may not always be extracted on all boundaries between pedestrian and background, so the areas of the contour groups need to be expanded into areas that belong to the pedestrian but do not contain contour points. The expansion process has three steps:
a) set blocks around the contour group area,
b) calculate the disparity of each block, and
c) unite a block with the contour group if the block's disparity is nearly equal to that of the contour group.
After the pedestrian body part areas have been extracted, the body part areas are grouped to extract the candidate area, which corresponds to the entire body of the pedestrian. This process has two steps: unitizing and proving. In the unitizing process, the pedestrian body part areas of equal disparity are unitized and the candidate area is generated. Next, in the proving process, the spatial disparity among the body part areas in the candidate area is calculated and judged as to whether or not it is equivalent to the spatial disparity of the surrounding areas. The reason this process is performed is that some candidate areas may consist of several objects which are adjacent to each other and have nearly equal disparities. Therefore, if the spatial disparity of the body part areas is different from that of the surrounding areas, the candidate area is judged to contain several different objects and is separated.
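A rough sketch of the first step, under the assumption that a per-pixel disparity map from the stereo pair is already available, is shown below: Prewitt edge extraction followed by a block-wise grouping of contour cells whose disparities are close. The block size, edge threshold and disparity tolerance are illustrative assumptions, and the grouping here is deliberately simpler than the full reconstruction described above.

```python
import cv2
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
PREWITT_Y = PREWITT_X.T

def prewitt_contours(fir_image: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """Return a binary contour map of a FIR image using the Prewitt operator."""
    img = fir_image.astype(np.float32)
    gx = cv2.filter2D(img, -1, PREWITT_X)
    gy = cv2.filter2D(img, -1, PREWITT_Y)
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

def group_blocks_by_disparity(contours: np.ndarray, disparity: np.ndarray,
                              block: int = 4, tol: float = 1.0):
    """Split the contour map into block x block cells, compute each contour cell's
    mean disparity, and group cells whose disparities differ by less than `tol`."""
    h, w = contours.shape
    cells = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cell = contours[y:y + block, x:x + block]
            if cell.any():
                d = float(disparity[y:y + block, x:x + block][cell > 0].mean())
                cells.append(((y, x), d))
    # Naive grouping: sort by disparity and cut wherever the gap exceeds the tolerance.
    cells.sort(key=lambda c: c[1])
    groups, current = [], [cells[0]] if cells else []
    for prev, cur in zip(cells, cells[1:]):
        if cur[1] - prev[1] <= tol:
            current.append(cur)
        else:
            groups.append(current)
            current = [cur]
    if current:
        groups.append(current)
    return groups
```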

Figure 4.3 Grouping of body part area using disparity information

4.1.3.2 Candidate area classification:-

Candidate area classification is the process of judging whether or not an extracted candidate area is a pedestrian. In order to reduce the occurrence of classification errors due to occlusion or video noise, this process consists of two steps: classification in the current frame and time-series classification, as described below.
(a) Classification in the current frame
First, each candidate area is divided into several body part areas, such as head and legs, and a “plausibility” is calculated for each body part area. The candidate area is then judged as to whether or not it is a pedestrian using the calculated plausibility. In consideration of the variation of images due to weather and time of day, the judgment is performed in accordance with environmental conditions. For example, when extracting a head area at night time or in bad weather, image binarization is used because a head usually has a higher intensity than the background. However, this method is not useful in the daytime because the sun heats the background and the intensity contrast of the head area decreases. Therefore, head area extraction using image binarization is performed only at night time or in bad weather, and in the daytime the head contour consisting of contour points is used instead. Parameters such as the binarization threshold are derived statistically in consideration of the environmental conditions.
(b) Time-series classification
This process uses the results of classification in the current and past frames, using the tracking process. A candidate area is judged to be a “pedestrian” only when the ratio of frames in which the candidate area is judged to be a pedestrian, out of the total frames, exceeds a certain value (a minimal sketch follows).
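A minimal sketch of the time-series step is given below: per-frame classification results for one tracked candidate are kept in a sliding window, and the candidate is confirmed only when the positive ratio exceeds a threshold. The window length and the threshold are assumed values.

```python
from collections import deque

class TimeSeriesClassifier:
    """Confirm a tracked candidate as a pedestrian from per-frame results."""

    def __init__(self, window: int = 10, min_ratio: float = 0.6):
        self.results = deque(maxlen=window)  # recent per-frame True/False classifications
        self.min_ratio = min_ratio

    def update(self, is_pedestrian_this_frame: bool) -> bool:
        self.results.append(is_pedestrian_this_frame)
        ratio = sum(self.results) / len(self.results)
        # Require a full window of history before confirming.
        return len(self.results) == self.results.maxlen and ratio >= self.min_ratio

tracker = TimeSeriesClassifier()
for frame_result in [True, True, False, True, True, True, False, True, True, True]:
    confirmed = tracker.update(frame_result)
print("confirmed pedestrian:", confirmed)  # True: 8 of the last 10 frames were positive
```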

4.1.3.3 Candidate area tracking

Candidate areas are tracked over time so that candidate area classification can be performed. In the candidate area tracking process, the similarity between the candidate areas in the current and last frames is calculated. If the similarity is larger than a certain level, these candidate areas are labelled as the same. In calculating the similarity, parameters such as the candidate area's size variation and the difference of its centre of gravity are used. In addition, when the difference of the centre of gravity is calculated, the coordinates of the candidate area are corrected by calculating the yaw and pitch angles of the car (a simple similarity sketch follows).
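The sketch below illustrates the frame-to-frame association idea: two candidate areas are labelled as the same object when a similarity score built from size variation and centre-of-gravity displacement is high enough. The weights, distance scale and threshold are assumptions, and the yaw/pitch correction mentioned above is omitted.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    cx: float      # centroid x (centre of gravity)
    cy: float      # centroid y
    width: float
    height: float

def similarity(prev: Candidate, cur: Candidate) -> float:
    """Higher is more similar; combines area change and centroid displacement."""
    area_prev = prev.width * prev.height
    area_cur = cur.width * cur.height
    size_term = min(area_prev, area_cur) / max(area_prev, area_cur)   # 1.0 = same size
    dist = ((prev.cx - cur.cx) ** 2 + (prev.cy - cur.cy) ** 2) ** 0.5
    dist_term = 1.0 / (1.0 + dist / 20.0)   # assumed scale: 20 px displacement halves the term
    return 0.5 * size_term + 0.5 * dist_term

SAME_OBJECT_THRESHOLD = 0.7   # assumed
prev = Candidate(cx=120, cy=80, width=30, height=60)
cur = Candidate(cx=125, cy=82, width=32, height=62)
print(similarity(prev, cur) > SAME_OBJECT_THRESHOLD)   # True: the two areas are associated
```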

4.2 INTELLIGENT VISION FOR AUTOMOBILES AT NIGHT (IVAN)

Car driving is a process whose safety relies heavily on the driver's accurate visual information processing and proper reactions. Objects such as road signs, warnings and lane lines are critical for helping drivers understand the road conditions; failure to recognize these objects may cause serious consequences. In practice, drivers may experience more difficulty identifying these objects during night driving, leading to a much higher probability of traffic accidents. Statistics show that more than 20% of fatal traffic accidents occur between midnight and 6:00 in the morning, a period which accounts for only 2.4% of total traffic volume. Besides the driver's lack of attention, the largely reduced visual acuity and field of vision at night, due to low illumination caused by factors such as bad weather, obscure street lamps and the limited range of headlights, is also a major reason for this situation. For example, dipped headlights only illuminate about 56 meters, while the braking distance at 100 km/h is about 80 meters. Facing this problem,
attention has been drawn to research on automobile night vision systems which help to improve the visibility of objects on the road at night. In general, such a system is
equipped with night visors such as infrared cameras from which the information of objects
presenting on the road, such as bends, poles, pedestrians, other cars etc. can be extracted.
Then, this system will inform drivers by means of visual, acoustic or other signals about
the obstacles appearing in their way. Some of the research results have been transformed
into real products installed on high-end automobiles such as BMW 6 Series Coupe and
Mercedes-Benz 2007 S-Class series. Intelligent Vision for Automobiles at Night (IVAN), is
a highly advanced form of night vision system, which focuses on detecting, illuminating and
recognizing road signs at night. Infrared cameras are adopted to tackle the problem of low
visibility at night. Computer vision techniques, such as image enhancement, object
detection and recognition, etc., are used intensively in IVAN to analyse videos captured by the infrared cameras. Road sign detection and recognition functions are implemented to
reduce the probability of missing traffic signs in dark environments. The system can be
operated by the driver through a touch screen and audio notifications are used for
informing the driver of the possible dangers.

Figure4.4: System Overview of IVAN

Unlike normal cameras, the infrared cameras are sensitive to infrared and therefore capture objects that reflect infrared. Figure 4.5 compares the images captured by an infrared camera and a common webcam in the same night driving scenario. The analogue video signals are first encoded
using a TV capture card. Then, the video is enhanced and pre-processed for later stages. The
enhanced image is ready for shape detection which locates possible road signs in the video frames.
All the detected shapes will be sent to road sign recognition module to check whether they
correspond to the known road signs stored in the database. If a road sign is recognized, it will be
displayed on the screen. At the same time, IVAN will alert the driver when an important road sign,
such as a danger warning, is found. The detected shape will be displayed on the screen so that the
driver will be able to move the spotlight to illuminate the corresponding area.

Figure4.5: Images from different cameras
4.2.1 WORKING OF IVAN
The road sign detection module locates and segments potential road signs in real time. Based on the observation that most road signs have regular geometric shapes, such as rectangles, triangles and circles, the following steps are used for road sign detection in IVAN. The input image is first processed to reduce noise using a 5x5 Gaussian filter. Shades of grey are then converted to black and white (binarization) using several different thresholds. For each segmented image thus obtained, contours of the white regions are extracted. The contours are approximated into polygons using the Douglas-Peucker algorithm, which recursively finds a subset of vertices such that the enclosed shape is
similar to the original one. The resulting approximated polygons are further analysed: in order to improve detection speed and accuracy, they are classified as triangles or quadrilaterals by the polygons' vertex number.

Figure 4.6: The IVAN system

The system then verifies the detected shapes by checking their interior angles: for quadrilaterals, the interior angles should be within a tolerance around 90 degrees; for triangles, within a tolerance around 60 degrees. The tolerance parameters are constants defined to deal with perspective distortion and noise in the captured frame. Shapes are discarded if they do not have three or four vertices respectively, or if their interior angles violate the rules defined above.

Consequently, a set of quadrilaterals and triangles is detected; these shapes are regarded as candidate traffic signs and recorded by the tracking algorithm of the detection module. For round road signs, after the contours are extracted, the program verifies the detected contours by matching their shapes with a computed ellipse. If more than half of the points are matched locally, the candidate ellipse is verified. During the extraction process a geometric error is tolerated for each point; the degree of toleration varies adaptively with the size of each ellipse. In order to stabilize the detection result while minimizing the false acceptance rate, a tracking mechanism is employed to follow the road signs detected in the captured video. A circular buffer is created for each traffic sign successfully detected, and its bounding rectangle and centre point are recorded in the corresponding circular buffer. In the next frame, when a shape is detected in a similar location, the same circular buffer is used and its bounding rectangle and centre are updated. Only the shapes that appear more than 5 times in 10 consecutive frames are considered “successful detections” and displayed on the screen. Consequently, erroneous detections are eliminated, since they are not detected in consecutive frames. A rough sketch of the shape detection steps is given below.
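The sketch below follows the detection steps described above in Python with OpenCV: Gaussian smoothing, binarization at several thresholds, contour extraction, and Douglas-Peucker polygon approximation with classification by vertex count. The thresholds, minimum area and approximation tolerance are illustrative assumptions rather than IVAN's actual parameters, and the interior-angle and ellipse checks are omitted for brevity.

```python
import cv2
import numpy as np

def detect_sign_shapes(frame: np.ndarray):
    """Return lists of candidate triangles and quadrilaterals found in one frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)              # 5x5 Gaussian noise reduction
    triangles, quadrilaterals = [], []
    for thresh in (80, 140, 200):                             # assumed binarization thresholds
        _, binary = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) < 200:                # discard tiny regions
                continue
            # Douglas-Peucker polygon approximation.
            epsilon = 0.03 * cv2.arcLength(contour, True)
            poly = cv2.approxPolyDP(contour, epsilon, True)
            if len(poly) == 3:
                triangles.append(poly)
            elif len(poly) == 4:
                quadrilaterals.append(poly)
    return triangles, quadrilaterals
```

In the full system each accepted shape would additionally be verified against the interior-angle tolerances, tracked across frames via the circular buffers, and matched against the road sign database before being shown to the driver.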

Figure4.7 Adaptive infrared camera

The night vision feature is implemented by utilizing an infrared camera to capture the front view. Since the infrared camera has strong sensitivity to infrared, the captured images enable drivers to see the road conditions and identify road signs or other objects at night. Inspired by the BMW 7 Series' Adaptive Headlights System, an adaptive control mechanism is implemented by estimating an adjustment angle from the vehicle's speed and turning angle. Figure 4.8 illustrates the usage of camera adjustment.

Figure 4.8

4.3. TRUE-COLOUR NIGHT VISION

Numerous studies have shown that scene understanding, reaction time, and object identification are faster and more accurate with colour imagery than with monochrome imagery. Considering surveillance, reconnaissance, and security applications, colour imagery has two main benefits over monochrome imagery. The first is that colour improves contrast, which allows for better scene segmentation and object detection. This contrast improvement applies to both true-colour and false-colour images, where false-colour imagery can be formed by the fusion of images from cameras with different spectral sensitivities (e.g., image intensified with thermal IR). The second benefit of colour is that it provides more information. Access to stored colour knowledge in the brain or a computer database can be utilized to enable better object identification and scene understanding. This second improvement applies primarily to true-colour images, since false-colour images do not necessarily match the stored colour information, and may in fact be detrimental in this regard.
General benefits and drawbacks of true-colour night vision (TCNV) systems are listed in Table 1, and examples of the utility of true-colour information are shown in Figure 4.9. For example, Figure 4.9 demonstrates that successfully finding the man with the orange shirt, determining the difference between flags, or being able to pick out the blue car are all tasks that benefit greatly from the additional information that true-colour imagery provides.
To obtain true-colour images, a camera must be sensitive to the visible portion of the electromagnetic spectrum, and there must be a mechanism to filter or split the different parts (i.e., colours) of the visible spectrum so that colour information can be extracted. This need to filter the input has the consequence of reducing the available signal at the detector, which is the primary drawback of a true-colour system intended for use in low-light situations. Furthermore, standard monochrome image-intensified systems are typically designed to take advantage of the relatively high near-infrared (NIR) signal available from the night sky. To mitigate the inherent reduction in signal due to filtering, a true-colour system should also be able to utilize this NIR light. In addition, sensitivity to NIR is also needed for viewing IR laser aiming devices, as demonstrated in Figure 4.10. The ability to produce true-colour content while maintaining sensitivity to NIR is one of the inherent challenges in making a viable true-colour night vision camera.
New camera technology and image processing routines have been developed to enable the use of true-colour information from the visible portion of the spectrum while utilizing the full visible to near infrared (V-NIR) range (roughly 400 to 1000 nm in wavelength) for the brightness information. There are two different types of TCNV cameras: one camera uses a liquid crystal filter in front of an image intensified detector, and the other uses a mosaic filter deposited on the pixels of an EMCCD detector. Both cameras are based on new technologies: the liquid crystal camera uses fast-switching filters with optimized transmission bands, and the mosaic filter camera relies on recent advances in CCD technology.

Figure 4.9 monochrome and colour low light level imagery.

Figure 4.10: Image taken with a TCNV camera demonstrating the ability to produce colour imagery while utilizing both visible and NIR signal for brightness. The bright spot on the red car is from an NIR laser aiming device.

4.3.1 DESCRIPTION OF CAMERAS

4.3.1.1LIQUID CRYSTAL FILTER INTENSIFIED CAMERA

Liquid crystal (LC) filters consist of stacks of polarizing, birefringent, and variable-retardance substrates. With applied voltages, the transmission of the stack can be electronically switched to a different band pass or “colour” state (see Figure 4.11). A full colour image is constructed by using separate images taken in 3 or 4 different colour states and then mixing them with appropriate weights to form an RGB output image. Although the colour information is built up over multiple exposures, the image is updated with each captured frame, rather than waiting until a complete set of 3 or 4 frames is captured. In addition to the visible wavelengths, the LC filters also pass NIR radiation to increase the available signal and to enable viewing of IR laser aiming devices. With the use of specifically tailored band pass states and optimized colour mixing algorithms, the NIR signal contributes to the brightness of an image without destroying the true colour information. A simplified sketch of this colour mixing is given below.
Figure 4.11: A liquid crystal filter shown in 3 different colour states
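A simplified sketch of the colour-mixing step is given below: three monochrome frames taken through different filter states are combined with a weight matrix into an RGB image, and a NIR-dominated frame is used only to boost brightness. The three-state assumption, the weight values and the brightness model are illustrative; the real camera uses optimized mixing coefficients derived for its specific band pass states.

```python
import numpy as np

# Assumed 3 filter states; row i gives that state's contribution to (R, G, B).
MIX_WEIGHTS = np.array([
    [0.9, 0.1, 0.0],   # "red-ish" filter state (assumed weights)
    [0.1, 0.8, 0.1],   # "green-ish" filter state
    [0.0, 0.1, 0.9],   # "blue-ish" filter state
], dtype=np.float32)

def mix_to_rgb(state_frames, nir_frame):
    """Combine three sequential monochrome frames (one per filter state) into an RGB
    image, then scale brightness with the NIR frame so the NIR signal is not wasted."""
    stack = np.stack([f.astype(np.float32) for f in state_frames], axis=-1)  # H x W x 3
    rgb = stack @ MIX_WEIGHTS                        # weighted mix into R, G, B channels
    boost = 0.5 + 0.5 * (nir_frame.astype(np.float32)[..., None] / 255.0)
    return np.clip(rgb * boost, 0, 255).astype(np.uint8)  # NIR adds brightness, not hue

# Illustrative use with synthetic 8-bit frames.
frames = [np.full((4, 4), v, dtype=np.uint8) for v in (200, 120, 60)]
nir = np.full((4, 4), 180, dtype=np.uint8)
print(mix_to_rgb(frames, nir)[0, 0])   # one RGB pixel built from the three states
```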

Table: Main benefits and drawbacks of a night vision camera that uses a fast-switching liquid crystal filter

Benefits:
- Full colour resolution at each pixel.
- The filter can be positioned out of the optical path for full detector sensitivity at the lowest light levels.
- Versatile: the filter can be used with any type of V-NIR low-light detector.
- Low power.
- No moving parts.
- Fast-switching LC: no “dead time”.

Drawbacks:
- Time-sequential image capture: it takes multiple frames to produce a full colour image.
- Reduced signal: the filters rely on polarization, which leads to an overall reduction in signal of approximately 50% (the average transmission is less than 50% in the visible, but higher than 50% in the NIR).

The latest LC filters are extremely fast switching, taking less than 1 ms to switch between any two states. Fast switching enables the camera to operate without “dead time” and the associated light loss while the filter is in an undefined state. With typical LC filters it is impractical to operate at video rates, i.e., 30 frames/second (fps), since the dead-time is a significant fraction of the frame period; with the fast-switching filters, frame rates of 180 fps are routinely used. The true colour night vision cameras use an image intensified CMOS detector with a “smart camera” digital media processor (DMP). The image intensifier is a Gen III blue-enhanced tube, which is bonded to the sensor via a 2:1 fibre-optic reducer. The CMOS array is a ½” format 640×480 pixel detector capable of 200 fps at full resolution. A high frame rate detector is used to enable a reduction in the image blur associated with time-sequential image capture; however, at the lowest light settings, longer exposure times (and thus lower frame rates, ~30 fps) are used.

Figure 4.12 TCNV prototypes with LC filter and image intensified CMOS

5. Applications

These are the common applications of night vision technology:

1. Military
2. Hunting
3. Security
4. Navigation
5. Wildlife observation
6. Hidden object detection

The original purpose of night vision was to locate enemy targets at night. It is used extensively by the military for that purpose, as well as for navigation and targeting. Police and security forces often use both thermal imaging and image enhancement technology, particularly for surveillance. Hunters use it to detect animals and birds. Detectives and private investigators use night vision to watch the people they are assigned to track.
Many businesses have permanently mounted cameras equipped with night vision to monitor their surroundings. A truly amazing ability of thermal imaging is that it can reveal whether an area has been disturbed; it can show that the ground has been dug up to bury something, even if there is no obvious sign to the naked eye. Law enforcement has used this to discover items that have been hidden by criminals, including money, drugs and bodies. Recent changes to areas such as walls can also be seen using thermal imaging, which has provided important clues in several cases. Many people are beginning to discover the unique world that can be found after darkness falls.

6.CASE STUDY

INTERNATIONAL CASE STUDIES

BMW

Figure 6.1

BMW's Night Vision system was introduced in 2005 on the BMW 7 Series (E65). This system processes far infrared radiation, which minimizes non-essential information by placing a greater emphasis on pedestrians and animals, allows for a range of 300 meters or nearly 1,000 feet, and avoids “dazzle” from headlights, road lights and similar intense light sources.
A 2008 update added the system on the redesigned BMW 7 Series (F01), which flashes a warning symbol on the navigation/information screen and the automotive head-up display when it detects pedestrians. A 2013 update added Dynamic Light Spot. The system provides a real-time video image that also depicts, on the Control Display, persons, animals and other objects emitting heat when they are outside of the light beam, and warns in the event of an impending collision. The Dynamic Light Spot
is produced by a special headlight that directs the light beam onto the recognised
persons or animals respectively, thus drawing the driver’s attention to possible hazards
in good time. As soon as the remote infrared detects pedestrians or larger animals on
course for collision in the dark, the system directs two separately controlled Dynamic
Light Spots at them without creating an unpleasant glare. In the event of an acute risk,
an acoustic warning signal is also sounded and the brakes are set to maximum standby.
For the model year 2014, BMW 5-series will also have these new features.

MERCEDES-BENZ

Figure6.2
The series production Night View Assist system was introduced in 2005 on the redesigned Mercedes-Benz S-Class (W221). It was the first system to use the instrument cluster's LCD as a display. A 2009 revision added a pedestrian detection function on the redesigned Mercedes-Benz E-Class (W212) and the refreshed S-Class; however, the E-Class uses the navigation screen as its display.

In 2011, Night View Assist Plus with Spotlight Function premiered: the Mercedes-Benz CL-Class (C216) became the first series production car with a night vision spotlight function.

Figure 6.3: Mercedes-Benz S-Class (W221)

Mercedes-Benz has unveiled an auxiliary spotlight feature for its so-called Active Night View Assist headlamps to provide what it describes as “an enhanced level of pedestrian safety”.
Until now, Active Night View Assist has used an infra-red camera to record ghostly video of pedestrians within a pre-determined field ahead of the car and subsequently play it in real time on a monitor within the instrument binnacle, thus alerting the driver to a potential safety hazard at night or in low light conditions.
The new feature, which is designed to work at speeds above 45kph, sets out to provide not
only the driver but also pedestrians with an enhanced warning by employing a spotlight to
illuminate the area where the camera detects their presence. The spotlight feature relies on
the existing infra-red camera mounted within the headlamp assembly to detect pedestrians
at distances of up to 80 metres and uses the main beam function of the headlamps to light
up the immediate area where they are detected. Depending on the vehicle's current speed, pedestrians can be illuminated up to four times before the car arrives.

Figure 6.4

A second camera mounted within the windscreen, where it also assists the functions
for Mercedes’ Speed Limit Assist and Lane Keeping Assist, records the position of
other cars and determines whether it is safe to illuminate the area where pedestrians
are detected. If the headlamps are set to dipped beam, the pedestrian is illuminated
with the spotlight function beyond the field of the dipped beam.

Figure 6.5

7. CONCLUSION

The automotive head-up display (HUD) is an emerging technology which has many advantages in ergonomic aspects as well as for the comfort of the driver. Research is going on for the development of the HUD to minimise the space taken by the central console and to display all the necessary information on the windshield itself. But the HUD has some limitations: it requires a partially reflecting element so that the projected image is reflected by the windscreen, which acts as a screen, and the projector must be arranged with a projection angle above the critical angle of the glass to reflect the image. Another main problem is that the HUD cannot provide a good display during the day time; the background light is so high that the projected image won't be properly seen. This becomes challenging and limits the display area of the HUD to a small portion of the windshield.
However, the HUD is well suited to night driving. During the night, except for the high beams of approaching vehicles, high-intensity lights are fewer, so the HUD can work well for night vision. Presently, in night vision technology, after spotting a human or animal in the field of driving, the information is displayed on the small screen on the central console. This is ergonomically not completely satisfactory for the driver, as he needs to take his eyes away from the road to look at the screen. So most drivers won't rely on night vision all the time. With the use of holographic glass projection technology the vision system can be developed to the next generation: by combining IVAN technology and pedestrian detection with holographic projection, the exact position, size and type of the detected object can be shown directly on the windshield glass that the driver sees through. The high-intensity holographic laser projection can display the symbols detected by IVAN as well as the road markings on the windscreen. Also, using this projection, the bounding box of the detected human or animal can be shown on the windscreen at the right position of the object as the driver sees it through the windscreen.

8. REFERENCES

1. K. Rumar, Adaptive illumination systems for motor vehicles: Towards a more intelligent headlighting system, Report No. UMTRI-97-7, Ann Arbor, MI: The University of Michigan Transportation Research Institute, 1997.
2. P. A. Thompson, Daytime running lamps (DRLs) for pedestrian protection, Proceedings of Progress in Automotive Lighting, Darmstadt, Germany, 2003.
3. H. Nanda and L. Davis, “Probabilistic Template Based Pedestrian Detection in Infrared Videos,” in Procs. IEEE Intelligent Vehicles Symposium 2002, June 2002.
4. Procs. IEEE Intelligent Vehicles Symposium 2002, June 2002.
5. Y. L. Guilloux and J. Lonnoy, “PAROTO Project: The Benefit of Infrared Imagery for Obstacle Avoidance,” in Procs. IEEE Intelligent Vehicles Symposium 2002, June 2002.
6. IEEE Intl. Conf. on Pattern Recognition, pp. 1325–1330, June 1998.
7. “… Infrared Images,” in Procs. IEEE Intelligent Vehicles Symposium 2003, June 2003, in press.

www.wikipedia.org
en.wikipedia.org/wiki/Night_vision_device
www.morovision.com/how_thermal_imaging_works.htm
en.wikipedia.org/wiki/Night_vision

9. BIBLIOGRAPHY

http://www.pspc.dibe.unige.it/~drivsco
http://www.bmw.com/com/en/newvehicles/6series/coupe/2007/allfacts/ergonomics_nightvision.html
http://www.mercedesforum.com/m_35841/tm.htm
http://www.gps4us.com/news/post/Windshield-projection-technology-renders-GPS-navigation-route
