Sonar Radar Using Arduino 2023-2024
CHAPTER 1
INTRODUCTION
Ultrasonic radar systems have gained popularity in various applications due to their
reliability, affordability, and ease of integration. These systems utilize ultrasonic sensors to
detect objects and measure distances by emitting sound waves at frequencies beyond human
hearing. When these waves encounter an object, they reflect back to the sensor, allowing the
system to calculate the distance based on the time taken for the echoes to return.
The core principle behind ultrasonic radar is the time-of-flight (ToF) measurement, which
involves calculating the round-trip time of the sound waves. This technology is commonly
used in automotive parking assistance, robotics, and industrial automation for collision
avoidance, object detection, and level measurement.
Ultrasonic radar systems are favored for their ability to operate effectively in various
environmental conditions, including low visibility and darkness, where optical sensors might
fail. They are also immune to interference from other sensors, making them reliable in
complex environments.
CHAPTER 2
LITERATURE SURVEY
Rafael E. Carrillo, Adrien Besson, et al. (2018) proposed a system using a new compressed-sensing-based method and conclude that ultrafast imaging based on plane-wave (PW) insonification is an active area of research due to its capability of reaching high frame rates. Several approaches have been proposed, based either on Fourier-domain reconstruction or on delay-and-sum (DAS) reconstruction. Using a single PW, these techniques achieve low quality, in terms of resolution and contrast, compared to the classic DAS method with focused beams. To overcome this drawback, compounding of several steered PWs is needed, which currently decreases the high frame rate limit that could be reached by such techniques. Based on a compressed sensing (CS) framework, the authors propose a new method that allows the reconstruction of high-quality ultrasound (US) images from only one PW, at the expense of higher computational complexity at reconstruction. The paper introduces a novel approach to Fourier-based beamforming: exploiting the sparsity of US images in a sparsity-averaging model, it recovers high-quality images using an l1-minimization algorithm. This leads to an increase in CNR of approximately 2 dB compared to the Fourier-based and space-based techniques for single insonification, while keeping the same spatial resolution.
Shun Miao, et al. (2018) developed a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. The CNN regressors are trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on three potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
Adrien Besson, Miaomiao Zhang, et al. (2019) designed a system for ultrafast imaging based on plane-wave (PW) insonification, which is an active area of research due to its capability of reaching high frame rates. The framework takes advantage of both the ability to formulate the imaging inverse problem in the Fourier domain and the sparsity of US images in a sparsifying domain. By means of simulations, in vitro and in vivo data, the authors show that the proposed framework significantly reduces image artifacts, i.e., measurement noise and sidelobes, compared with classical methods, leading to an increase in image quality. The paper proposes a novel framework for Fourier-based reconstruction of signals obtained with several PW insonifications. The framework relies on the ability to pose the Fourier reconstruction problem as an ill-posed inverse problem and on the sparsity of the US images in an analysis domain. The reconstruction is achieved by solving an l1-minimization problem.
Adrien Besson, Rafael, et al. (2017) described two applications of the framework, namely the sparse inversion of the beamforming problem and compressed beamforming, in which the framework is combined with compressed sensing. Based on numerical simulations and experimental studies, the authors show the advantage of the proposed methods in terms of image quality compared to classical methods. Two main applications follow from this formulation. First, it increases the image quality by removing measurement artifacts induced by the gridding operation. Second, it is suited to the CS framework and enables the reconstruction of high-quality images from under-sampled raw data acquired using only a few transducers.
Yang Zhang, Yuexin Guo, et al. (2017) proposed a new method that achieves high contrast and reduces the axial ghost artifact with only three transmissions for ultrafast imaging. A new coherence-based factor was derived. The raw data from the PW and the spherical wave (SW) transmissions were compounded based on this factor, with consideration of their respective coherence and interrelationship, to suppress the side lobes and reduce the axial artifact. Field II simulations show that the proposed method greatly reduced the axial artifact by 20 dB to 35 dB compared with coherent plane wave compounding (CPW) and suppressed side lobes by 15 dB to 30 dB compared with CPW and sparse SA imaging. The authors demonstrated the feasibility of combined PW and SW transmissions with cross-coherence-based reconstruction in reducing the axial ghost artifact and side lobes in ultrafast imaging. One limitation of this work is the weak energy of the SW transmissions from the leftmost and rightmost elements: since only one element was used to generate a SW, the signal would be affected by noise in the deep region of interest in the scanned medium. Future work consists of replacing the SW transmissions with diverging wave transmissions.
Li Fangmin, Chen Ke, et al. (2017) implemented a system using a CNN to regress 3DMM shape and texture parameters directly from an input photo, and offered a method for generating huge numbers of labeled examples. There are two key points in the paper: one is the generation of training data for model training; the other is the training of the 3D reconstruction model. Experimental results and analysis show that this method costs much less time than traditional methods of 3D face modeling, improves on existing deep-learning-based methods for different races and for photos at arbitrary angles, and gives the system better robustness. The authors propose a very deep CNN architecture to regress 3DMM parameters directly from input images, provide a low-cost solution to the problem of obtaining sufficient labeled data to train this network, and show the regressed 3D shapes to be more accurate and robust than those of alternative methods.
Rinan Wei, Fugen Zhou, et al. (2017) proposed a new volumetric imaging method from a single X-ray projection utilizing a convolutional neural network (CNN). With the aid of principal component analysis (PCA), the method can estimate the volumetric image accurately; the PCA motion model is applied as prior information for the estimation. Due to the high parallelization of CNNs, the computing efficiency of the proposed method is able to meet the real-time requirement of practical treatment (less than 0.05 seconds). A synthetic test using the 4D Extended Cardiac-Torso (XCAT) phantom was carried out, which verifies the effectiveness of the method.
Jorge Racedo, Matthew W. Urban, et al. (2019) proposed a method that uses multiple focused ultrasound beams to generate push beams with acoustic radiation force. Applying these push beams generates propagating shear waves, and the propagation motion is measured with ultrafast ultrasound imaging. The shear wave motion data are directionally filtered, and a 2-D shear wave velocity (SWV) algorithm is applied to create group velocity maps. This algorithm uses a moving window and a specified patch for performing cross-correlations of time-domain signals. The study presented a systematic evaluation of the 2-D SWV analysis to determine the effects that varying patch (p) and window (w) sizes have on various image evaluation metrics. It was found that large values of p and w provided reliable measurements of SWV in homogeneous phantoms with low CV. For the inclusion phantoms, different trends in the p and w values were observed for optimizing CNR and bias. This type of study provides a framework for constructing optimal images that could be reconstructed using an iterative or multiscale approach.
Jingfeng Lu, Fabien Millioz, et al. (2019) focused on a convolutional neural network (CNN) architecture for high-quality reconstruction of diverging wave (DW) ultrasound images using a small number of transmissions. The proposed deep-learning-based approach aims at learning a compounding operator to reconstruct high-quality images from a small number of DWs. Experiments performed on a large number of in vitro and in vivo samples demonstrate that the proposed method produces high-quality images using only three DWs, yielding an image quality equivalent to the one obtained with standard compounding of 31 DWs in terms of contrast and resolution.
Shohei Ouchi and Satoshi, et al. (2021) proposed a novel transformed-image-domain CNN-CS with multi-channel grouped CNN-based image reconstruction using the Fresnel transform (eFREBAS transform). Experimental results showed that the proposed method was able to predict an artifact-free image better than other methods, especially for low sampling rates of 20-30%. eFREBAS-CNN removes most of the aliasing artifacts and improves the restoration of the fine structure of images. These results indicate that applying transformed-image-domain-based CNN-CS is effective in improving reconstruction performance.
Introducing sparse arrays into the method creates SCOBA-3D: a sparse beamformer that offers significant element reduction and thus allows performing 3-D imaging with the resources typically available for 2-D setups. To create 2-D thinned arrays, a scalable and systematic way to design 2-D fractal sparse arrays is presented. Fractal array design complements the proposed beamforming by allowing the construction of sparse arrays in which the majority of the receive electronics are discarded. This reduces the processing rate, cost, and power, facilitating high-performance 3-D US imaging with limited hardware. The proposed framework paves the way for affordable ultrafast US devices that perform high-quality 3-D imaging, as demonstrated using phantom and ex-vivo data.
Jui-Ying Lu, Po-Yang Lee, et al. (2022) developed a convolutional neural network (CNN) beamformer based on a combination of the GoogLeNet and U-Net architectures to replace the conventional delay-and-sum (DAS) algorithm and obtain high-quality images at a high frame rate. RF channel data are used as the inputs for the CNN beamformers, and the outputs are in-phase and quadrature data. Phantom experiments revealed that the images predicted by the CNN beamformers had higher resolution and contrast than those predicted by conventional single-angle PW imaging with the DAS approach. In in vivo studies, the contrast-to-noise ratios (CNRs) of carotid artery images predicted by the CNN beamformers using three or five PWs as ground truths were approximately 12 dB in the transverse view, considerably higher than the CNR obtained using the DAS beamformer (3.9 dB). Most tissue speckle information was retained in the in vivo images produced by the CNN beamformers. In conclusion, only a single PW at 0° was fired, but the quality of the output image was close to that of an image generated using three or five PW angles. The quality versus frame rate tradeoff of coherence compounding could be mitigated through the use of the proposed CNN for beamforming.
CHAPTER 3
In the existing system, ultrasonic sensors measure distance by using ultrasonic waves. The sensor head emits an ultrasonic wave and receives the wave reflected back from the target. Ultrasonic sensors measure the distance to the target by measuring the time between emission and reception. An optical sensor has a separate transmitter and receiver, whereas an ultrasonic sensor uses a single ultrasonic element for both emission and reception. In a reflective-model ultrasonic sensor, a single oscillator emits and receives ultrasonic waves alternately, which enables miniaturization of the sensor head. The distance can be calculated with the following formula:
Distance L = 1/2 × T × C
Where L is the distance, T is the time between emission and reception, and C is the sonic speed. (The value is multiplied by 1/2 because T is the time for the go-and-return distance.)
The detection system offers the following typical characteristics. Transparent objects are detectable: since ultrasonic waves can reflect off a glass or liquid surface and return to the sensor head, even transparent targets can be detected. Detection is resistant to mist and dirt: it is not affected by accumulation of dust or dirt. Complex-shaped objects are detectable: presence detection is stable even for targets such as mesh trays or springs.
Radar technology was developed during World War II, when it was used for detecting approaching aircraft; it was later applied to many other purposes, finally leading to the advanced military radars in use today.
Military radars have a highly specialized design to be highly mobile and easily transportable, by air as well as ground. A military radar should provide early warning and alerting along with weapon-control functions, and is specially designed so that it can be deployed within minutes.
An Arduino is a microcontroller-based kit which can either be used directly by purchasing it from a vendor or be made at home from components, owing to its open-source hardware design. It is mainly used in communications and in controlling or operating many devices.
The first object was placed at a distance of 30.5 cm measured with a ruler, and the system measured the distance as 32 cm; the second object was placed at a distance of 20 cm, and the system measured it as 21 cm. Hence the calculated efficiency turned out to be about 95%. The figure represents a brief overview of this radar system: the controller is an Arduino, the input is the ultrasonic sensor, and the output is a servo motor which rotates through 180 degrees.
The microcontroller controls all the operations of this system, from rotation of the motors to obstacle detection by the ultrasonic sensor and representation of the result on the screen. The figure represents the system's block diagram, showing how the work flows in this radar system. The sensor senses an obstacle and determines its angle of incidence and its distance from the radar. The servo motor rotates back and forth continuously, sweeping the sensor. The data obtained are encoded and fed to the Processing IDE, which represents them on the screen; the results are displayed further in this paper. All these operations are performed by the Arduino microcontroller, from rotating the servo and collecting data from the sensor to encoding the data and transferring it to the display.
Arduino is an open-source electronics platform based on easy-to-use hardware and software. Arduino boards are able to read inputs - light on a sensor, a finger on a button, or a Twitter message - and turn them into outputs - activating a motor, turning on an LED, publishing something online.
You can tell the board what to do by sending a set of instructions to the microcontroller on the board. To do so you use the Arduino programming language (based on Wiring) and the Arduino Software (IDE), based on Processing. Over the years Arduino has been the brain of thousands of projects, from everyday objects to complex scientific instruments. A worldwide community of makers - students, hobbyists, artists, programmers, and professionals - has gathered around this open-source platform, and their contributions have added up to an incredible amount of accessible knowledge that can be of great help to novices and experts alike.
Arduino was born at the Ivrea Interaction Design Institute as an easy tool for fast prototyping,
aimed at students without a background in electronics and programming. As soon as it
reached a wider community, the Arduino board started changing to adapt to new needs and
challenges, differentiating its offer from simple 8-bit boards to products for IoT applications, wearables, 3D printing, and embedded environments.
All Arduino boards are completely open-source, empowering users to build them independently and eventually adapt them to their particular needs. The software, too, is open source, and it is growing through the contributions of users worldwide.