IET Circuits, Devices & Systems

Received: 25 May 2020 | Revised: 17 March 2021 | Accepted: 19 March 2021
DOI: 10.1049/cds2.12074

ORIGINAL RESEARCH PAPER

FPGACam: A FPGA based efficient camera interfacing architecture for real time video processing

Sayantam Sarkar 1 | Satish S. Bhairannawar 2 | Raja K.B. 3

1 Department of Electronics and Communication Engineering, Vijaya Vittala Institute of Technology, Bangalore, Karnataka, India
2 Department of Electronics and Communication Engineering, Shri Dharmasthala Manjunatheshwara College of Engineering and Technology, Dharwad, Karnataka, India
3 Department of Electronics and Communication Engineering, University Visvesvaraya College of Engineering, Bangalore, Karnataka, India

Correspondence
Sayantam Sarkar, Department of Electronics and Communication Engineering, Vijaya Vittala Institute of Technology, Bangalore, Karnataka-560077, India.
Email: sayantam.61@gmail.com

Abstract
In most real time video processing applications, cameras are used to capture live video, with embedded systems/Field Programmable Gate Arrays (FPGAs) used to process it and convert it into a suitable format supported by display devices. In such cases, the interface between the camera and the display device plays a vital role in the quality of the captured and displayed video, respectively. In this paper, we propose an efficient FPGA-based low cost Complementary Metal Oxide Semiconductor (CMOS) camera interfacing architecture for live video streaming and processing applications. The novelty of our work is the design of optimised architectures for Controllers, Converters, and several interfacing blocks to extract and process the video frames efficiently in real time. Parallelism has been exploited in the design of the Image Capture and Video Graphics Array (VGA) Generator blocks. The Display Data Channel Conversion block required for VGA to High Definition Multimedia Interface Conversion has been modified to suit our objective by using an optimised Finite State Machine, and the Transition Minimised Differential Signalling Encoder is realised through simple logic architectures, respectively. The hardware utilization of the entire architecture is compared with the existing one, which shows that the proposed architecture requires nearly 44% less hardware resources than the existing one.

1 | INTRODUCTION

Vision is one of the most prominent senses in humans [1], due to which real time vision based systems, or parts of such systems, are commonly used in various real-time applications. In general, any real time video/image processing technique can be split into four main processing blocks, namely Sensor, Memory, Processing Unit, and Display [1, 2]. (i) Sensor: It is used to capture video sequences from the external environment and transform them into corresponding electrical signals suitable for further processing. (ii) Memory: It is internal RAM where the captured video sequences are stored temporarily for further processing. This block also helps to synchronize data between the sensor and the processing unit when the two blocks operate at different frequencies. (iii) Processing Unit: In this block, the required processing algorithm/architecture is implemented. It accepts data from Memory. (iv) Display: This block accepts the processed data and converts it into the required format supported by the display device.

In this paper, we propose a new Very Large Scale Integrated Circuit architecture to efficiently interface a low cost Complementary Metal Oxide Semiconductor (CMOS) camera and a display device with processing elements on an FPGA board, and also to display the video directly on a display device such as a monitor or TV. The entire architecture is optimised for lower hardware utilization without affecting the architectural accuracy. It is implemented using the Vivado 2018.3 tool, with the coding performed in the standard Very High-Speed Integrated Circuit Hardware Description Language (VHDL) [3]. The architecture is synthesized and tested in real time using the Digilent NexysVideo (xc7a200t-1sbg484c) FPGA board [4] and the Zybo Z7-10 (xc7z010-1clg400c) FPGA board [5] separately, where the NexysVideo is a medium level FPGA and the Zybo Z7-10 is a low level FPGA. The level of an FPGA is generally defined

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is
properly cited.
© 2021 The Authors. IET Circuits, Devices & Systems published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.

IET Circuits Devices Syst. 2021;1–16. wileyonlinelibrary.com/journal/cds2

by both the cost and hardware complexity, such as logic densities, internal memory, DSP blocks, etc.

1.1 | Contributions

The novel concepts of this paper are listed as follows:

(i). The Camera Controller block is optimised by the Modified Serial Camera Control Bus (SCCB) Conversion block at the architectural level.
(ii). The Image Capture and Video Graphics Array (VGA) Generator blocks are optimised using parallel architecture.
(iii). The design complexity of the different Colour Plane Conversion blocks is minimized at the architectural level using adders and shifters.
(iv). The VGA to High Definition Multimedia Interface (HDMI) Conversion block is optimised using modified Display Data Channel (DDC) Conversion and TMDS Encoder blocks, respectively, where an optimised Finite State Machine (FSM) is used to modify the DDC Conversion and the Transition Minimised Differential Signalling (TMDS) Encoder is optimised using Comparator and Addition blocks.

2 | RELATED WORKS

Normally, embedded systems with hardware/software co-simulation techniques are widely used to implement video processing systems due to their ease of implementation. Said et al. [6] proposed a video interface technique where the Xilinx EDK tool is used to interface a Micro-Blaze embedded processor and the architecture is implemented using the embedded C language on the processor. This system uses a Micron MT9V022 VGA camera to capture videos at the rate of 60 frames per second (fps), which are then displayed using standard DVI interfaces. This architecture requires a large amount of hardware resources and its overall operating speed is low. A similar technique for capturing video is presented by Abdaoui et al. [7], which is implemented on a Virtex-5 FPGA with a co-processor to control the overall operation. This approach increases the area requirements as well as decreases the overall operating frequencies. Biren and Berry [8] presented a new camera interfacing architecture which is implemented on an Altera Cyclone-III FPGA using different IP cores provided by the FPGA manufacturer. The use of IP cores increases the total hardware utilization of the architecture. Along with camera interfaces, many processing algorithms are implemented for real time video processing. Stereo vision based video rectification is presented by Maldeniya et al. [9], where a dual camera is used to capture stereo images which are interfaced with a Spartan-3E FPGA through the 100Base-T protocol using the embedded processor present in the Xilinx EDK tool. Similarly, real time motion tracking is presented by Mosqueron et al. [10], where motion is detected from the real time video captured by a camera through an FPGA using embedded processing. The disadvantage of this architecture is the low overall frame rate. It is necessary to understand sensor design to interface a camera properly with any of the processing elements. Zhao et al. [11] present a 64 × 64 array image sensor architecture designed using the UMC 0.18 μm technology, which has different user defined operating modes depending upon the application. The circuit used to read the sensor captures the rows present in the array sequentially and generates an analog voltage which is then digitized by an on-chip Analog to Digital Converter. This architecture is able to produce images of 64 × 64 resolution at 100 fps.

3 | PROPOSED ARCHITECTURE

The Digilent NexysVideo [4] and Zybo Z7-10 [5] FPGA boards are used separately to interface the OmniVision OV7670 [12] and OV9655 [13] cameras, which use the two wire SCCB interface [14] for initializing their internal registers to generate specific user defined video formats. The proposed architecture used to interface the camera with the FPGA is shown in Figure 1. In the proposed architecture, different blocks need different clock frequencies (i.e. clk1, clk2, and clk3, respectively) to generate proper output video sequences for capture and display. So, using the Phase Locked Loop/MMCM IP core [15] present in the Xilinx Vivado [16] tool as a clock generator, the architecture generates the different clock frequencies accurately for proper operation.

Initially, the camera is configured by the Optimised Controller block to generate a video sequence of 640 × 480 resolution at 30 fps using the two wire SCCB interfacing [14] protocol. The generated video is in the RGB565 [17] format, and for proper synchronization between the camera and the designed blocks, the FPGA should start accepting pixel values from the camera serially through the PMOD ports [18] only after the completion of the camera configuration through register setting. The Optimised Image Capture block converts the camera output into the proper format using the corresponding synchronization signals (VSYNC and HSYNC) of the camera, which is then converted to the RGB444 [17] format by the Optimised RGB565 to RGB444 Conversion block. The converted pixel values are then temporarily stored in the Memory [19] block, which helps in synchronizing the pixel values between different blocks operating at different frequencies [20]. After one row is written into the Memory, the Optimised VGA Signal Generator block reads the stored pixel data through the Optimised RGB444 to RGB565 Conversion block and then converts this data into the corresponding VGA signal with proper synchronization signals (i.e. vsync and hsync), respectively. Now, depending upon the Mode switch value, the color or gray pixel format is selected by the Optimised Color to Gray Conversion and MUX blocks, which are then used to generate the corresponding HDMI format by the Optimised VGA to HDMI Conversion block. The VGA and HDMI signals are connected to the input of the Port Selection block, which selects one particular display formatted signal from the input depending upon the port_select value and is connected to the display device, such as a monitor/TV, for the purpose of display.

FIGURE 1 Proposed architecture to interface camera with FPGA and display the video on display device. HDMI, High Definition Multimedia Interface; VGA, Video Graphics Array
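The camera configuration traffic described above follows SCCB's I2C-like three-phase write transmission: a device ID byte, a register sub-address, and the register data, each byte followed by a ninth don't-care bit. As a rough behavioural sketch of that framing (a Python software model for illustration only, not the VHDL design; the register value written below is a placeholder, not a verified OV7670 setting):

```python
# Behavioural model of an SCCB three-phase write transmission:
# device ID byte, register sub-address, register data, each byte
# followed by a ninth "don't-care" bit driven low. This sketches the
# serialisation only; the real design is the VHDL of Algorithm 1.

OV7670_WRITE_ID = 0x42  # OV7670 SCCB write slave address

def sccb_write_frame(dev_id, sub_addr, data):
    """Return the bit sequence shifted out on SIOD, MSB first."""
    bits = []
    for byte in (dev_id, sub_addr, data):
        bits += [(byte >> i) & 1 for i in range(7, -1, -1)]  # MSB first
        bits.append(0)  # ninth (don't-care) bit closing each phase
    return bits

# Example: write a placeholder value 0x04 to register address 0x12.
frame = sccb_write_frame(OV7670_WRITE_ID, 0x12, 0x04)
assert len(frame) == 27  # 3 phases x 9 bits
```

Start and stop conditions (SIOD falling/rising while SIOC is high) would bracket this bit stream on the wire; the model covers only the serialised payload.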

In this architecture, both VGA and HDMI signals are used, as some FPGA boards support the VGA port for display while others support the HDMI port. As a result, with some small user defined modifications, our proposed architecture can be implemented on any kind of FPGA board.

3.1 | Optimised Controller

The Optimised Controller unit is used to program the internal registers of the camera depending upon the corresponding datasheets [12, 13] to set the proper operating mode. For this implementation, an image size of 640 × 480 is considered at 30 fps, whose pixel values are represented in the RGB565 format [17]. The architecture of the Optimised Controller is shown in Figure 2, which consists of the Register Sets and Modified SCCB Format Generation blocks.

FIGURE 2 Proposed optimised controller architecture. SCCB, Serial Camera Control Bus

3.1.1 | Register sets

The specifications of the generated video streams of both cameras (OV7670 and OV9655), such as video size, color specifications, and frame rates, can be set by the designer depending upon the application requirements with the help of the corresponding datasheets [12, 13]. To generate good quality video, it is essential to assign the proper values to the corresponding addresses in the proper sequence, and any mistake in assigning these register addresses and values will certainly affect the quality of the generated video streams. So, the correct register addresses and values are stored in the Register Sets block, which is then used by the architecture to specify the correct video format.

3.1.2 | Modified SCCB format generation

The OmniVision cameras use the SCCB interfacing [14] protocol to access their internal registers to define the video specifications generated by the camera. SCCB stands for Serial Camera Control Bus, a simplified version of the Philips Inter-Integrated Circuit (I2C) [21] protocol. The Modified SCCB Format Generation block fetches the address and data from the Register Sets block and converts these into the corresponding SCCB format values, normally using a serialized data transfer technique. The pseudocode used to implement this block is given in Algorithm 1. From Algorithm 1, it can be seen that the whole block is designed using Counters, Comparators, and basic logical components. This algorithm is designed with the aim of optimizing hardware parameters without affecting functionality. Extra signals (send and taken) are used for proper synchronization between the two corresponding blocks. Similarly, the signal id is used to define the operating mode of the camera registers (i.e., read or write) by taking a specific value depending upon the corresponding camera datasheet [12, 13].

Algorithm 1 Modified SCCB Format Conversion
Inputs: data_in, send, id;
Variables: divider, busy;
Outputs: SIOC, SIOD, addr, taken;

if [{busy(11 : 10) = 2} | {busy(20 : 19) = 2} | {busy(29 : 28) = 2}] then
    SIOD = 0;
else
    SIOD = data_in(31);
end if;
if {(clk) rising_edge} then
    taken = 0;
    if {busy(31) = 0} then
        SIOC = 0;
        if (send = 1) then
            busy = (2^33 − 1); taken = 1;
            addr = (addr + 1);
            data_temp = {4, id, 132, data_in, 1};
        else
            if (divider = 256) then
                busy = {busy(30 : 0), 0}; divider = 0;
                data_temp = {data_temp(30 : 0), 1};
            else
                divider = (divider + 1);
            end if;
        end if;
    else
        if [{busy(31 : 29) = 7} & {busy(2 : 1) = 3}] then
            SIOC = 1;
        else if [{busy(30 : 29) = 0} & {busy(2 : 0) = 0}] then
            SIOC = 1;
        else if [{busy(31 : 29) = 6} & {busy(2 : 0) = 0}] then
            if {divider(7 : 6) = 0} then
                SIOC = 0;
            else
                SIOC = 1;
            end if;
        else
            SIOC = {divider(7) ⊙ divider(9)};
        end if;
    end if;
end if;

Most of the existing two wire I2C or SCCB interface architectures use the FSM model [22] or interdependent counters [23] to perform the serialization, which increases the design complexity and hardware requirements.

3.2 | Optimised image capture

For proper synchronization between the different pixels present in the video frames, the pixel data must be captured depending on the various synchronization signals, that is, PCLK, HREF, and VSYNC, respectively, from the camera. The main task of this block is to store the pixel data into Memory using these synchronization signals through the proper address (wr_addr) along with the pixel values (data_out) and write enable (wr) signal. The block diagram of the Optimised Image Capture block is shown in Figure 3.

The proposed architecture captures the pixel values from the camera and stores them into the Memory block depending on the VSYNC and HREF signals, which are checked by Comparator_1 and Comparator_2, respectively. The two blocks generate the intermediate reset signal rst_temp and enable signal en, respectively, which are then used to reset the entire architecture and to hold the previous values. The rst_temp and en signals are made high by Comparator_1 and Comparator_2 only when HREF = 1 and VSYNC = 0. In this situation, one pixel value is sent by the camera periodically on every second non-overlapping PCLK pulse. To store the pixel values, a feedback path is formed by interconnecting the AND Gate, Merger_1, DFF_1, and Concatenation blocks, which are able to track every second PCLK under valid conditions and, by making wr = 1, force the Memory block into write mode. Simultaneously, DFF_2, Merger_2 and DFF_3 are used to merge the incoming data from the camera at each rising edge of PCLK to generate video pixel values in the RGB565 format [17]. Also, the Counter and DFF_4 blocks are used to generate the accurate write address values wr_addr to store the pixel values in the correct predefined locations. This architecture uses basic gates to reduce hardware complexity and logic utilization [3].

3.3 | Colour plane conversions

The OV7670 [12] and OV9655 [13] cameras support the RGB565, RGB555, RGB444, Raw Bayer and Processed Bayer colour formats, but the RGB565 format [17] is considered to generate video frames from these cameras [12, 13]. This is because it is the nearest to the RGB888 color-space format generally used by most modern display devices. As a result, at the time of the conversion (i.e. RGB565 to RGB888 format) by the display devices, the generated errors will be very small. However, temporarily storing the frame pixels in the Memory unit using this format requires a large amount of memory, which is not present in most low and medium range FPGAs. So, the RGB565 format is converted into the RGB444 format to reduce the memory requirements by almost 25% of the original requirements. But the RGB444
FIGURE 3 Proposed architecture of optimised image capture

format is not suitable for producing good quality color pixels and hence not suitable for many applications. So, it is necessary to convert the stored RGB444 format back into the corresponding RGB565 format.

3.3.1 | Optimised RGB565 to RGB444 conversion

The conversion equation [1] is given in Equation (1) as follows:

    X_R = (15/31) × x_R
    X_G = (15/63) × x_G                                   (1)
    X_B = (15/31) × x_B

where x_R, x_G, x_B are the red, green, and blue pixel values in the RGB565 format, and X_R, X_G, X_B are the red, green, and blue pixel values in the RGB444 format.

From Equation (1), it can be seen that multipliers and dividers are required for the implementation, which increases the hardware requirements [24]. To overcome this problem, the multipliers and dividers are replaced by the corresponding shifter and adder blocks [20], which are given in Equation (2) as follows:

    X_R ≈ (LS2 + LS3 + LS4 + LS5 + LS6) × x_R
    X_G ≈ (LS2) × x_G                                     (2)
    X_B ≈ (LS2 + LS3 + LS4 + LS5 + LS6) × x_B

where LSn → Left Shift by Position n.

The architecture used to implement this conversion is shown in Figure 4, where the Q-format [25] is used to preserve the data accuracy by considering a fractional part for the intermediate stage, while the input and output signals are in normal binary format, respectively. The Concatenation blocks present on the input side separate the different color components (red, green and blue) from the pixel value, and then the required number of 0s are padded on the MSB side to make all three color planes 16-bit signals. Now, using shifters and adders, the intermediate values are generated, which are then concatenated to generate the individual color planes in the RGB444 format. These planes are then merged by the Merger block to generate the corresponding RGB444 formatted pixel value.
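The accuracy of the shift-and-add substitution in Equation (2) can be checked with a small software model (Python used here purely as a checking tool, not the VHDL design; the Q-format intermediate of Figure 4 is mimicked by a widened integer with assumed 8 fractional bits):

```python
# Software check of Equation (2): the fractional weights
# 2^-2 + 2^-3 + 2^-4 + 2^-5 + 2^-6 = 0.484375 approximate 15/31
# (red/blue planes), and 2^-2 = 0.25 approximates 15/63 (green).

FRAC = 8  # fractional bits of the Q-format-style intermediate (assumed)

def shift_add_scale(x, shifts):
    """Sum of right shifts of x, evaluated with FRAC fractional bits."""
    acc = sum((x << FRAC) >> s for s in shifts)
    return acc >> FRAC  # drop the fractional part at the output

def rgb565_to_rgb444(pixel):
    r, g, b = (pixel >> 11) & 0x1F, (pixel >> 5) & 0x3F, pixel & 0x1F
    r4 = shift_add_scale(r, (2, 3, 4, 5, 6))  # ~ (15/31) * r
    g4 = shift_add_scale(g, (2,))             # ~ (15/63) * g
    b4 = shift_add_scale(b, (2, 3, 4, 5, 6))  # ~ (15/31) * b
    return (r4 << 8) | (g4 << 4) | b4

assert rgb565_to_rgb444(0xFFFF) == 0xFFF  # white stays full scale
assert rgb565_to_rgb444(0x0000) == 0x000  # black stays black
```

Across all 5-bit inputs, the shift-and-add result for the red/blue weight stays within one level of the exact 15/31 scaling, which is the accuracy the Q-format intermediate is meant to preserve.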
3.3.2 | Optimised RGB444 to RGB565 conversion

The equation used to convert the stored RGB444 format into the corresponding RGB565 format [1] is given in Equation (3):

    X_R = (31/15) × x_R
    X_G = (63/15) × x_G                                   (3)
    X_B = (31/15) × x_B

The hardware utilization, which is mainly due to the multipliers, is reduced by replacing them with shifters and adders [20]. As a result, Equation (3) is modified into Equation (4):

    X_R ≈ (RS1 + LS4 + LS6) × x_R
    X_G ≈ (RS2 + LS3 + LS5 + LS5 + LS6) × x_G             (4)
    X_B ≈ (RS1 + LS4 + LS6) × x_B

where RSn → Right Shift by Position n.

The hardware architecture of the Optimised RGB444 to RGB565 Conversion block is designed from Equation (4) in the same way that the Optimised RGB565 to RGB444 Conversion block is designed from Equation (2).

3.4 | Optimised VGA Signal Generator

The converted pixel values from the Optimised RGB444 to RGB565 Conversion block are used by the Optimised VGA Signal Generator block to generate the corresponding VGA signal. The process of reading the pixel values from the Memory block starts after one frame of the video sequence has been stored. This avoids the synchronization problem between Memory and the remaining processing blocks.

The CMOS camera [12, 13] is programmed to generate videos of 640 × 480 resolution. So, the locally generated VGA signals must have the same resolution to regenerate the same video. Standard parameters [26] are used to generate VGA signals of 640 × 480 resolution. A 25.175 MHz clock is used to generate the local VGA signal to maintain data compatibility with the camera modules. The proposed architecture used to generate the local VGA signal is given in Figure 5, where the constant values [27] are calculated using Equations (5) to (12):

    hRez = Horizontal Visible Area                        (5)
    vRez = Vertical Visible Area                          (6)
    hMaxCount = Horizontal Line                           (7)
    hStartSync = hRez + Horizontal Front Porch            (8)
    hEndSync = hStartSync + Horizontal Synchronization    (9)
    vStartSync = vRez + Vertical Front Porch              (10)
    vEndSync = vStartSync + Vertical Synchronization      (11)
    vMaxCount = vEndSync + Vertical Back Porch            (12)

The main task of this block is to generate an address for Memory to read the pixel values and convert them into the corresponding VGA format with the required synchronization signals, namely VGA_blank, VGA_hsync, and VGA_vsync. Comparator_1, Counter_1, Comparator_2, and Counter_2 are used to generate the horizontal and vertical timing signals, namely hCounter and vCounter, respectively, to generate the proper VGA format. Comparator_1 is used to compare the output of Counter_1 with (hMaxCount − 1). If {(hMaxCount − 1) > hCounter}, then Comparator_1 activates Counter_1 through the rst signal. Comparator_2 is used to detect the time when hCounter = 0 and activates Counter_2 at that time. Also, Comparator_3 checks whether the output of Counter_2 satisfies {(vMaxCount − 1) > vCounter} or not, and depending on this condition, at each hCounter = 0 the value of vCounter is incremented by 1. These two intermediate values (hCounter and vCounter) are then compared with the vRez and hRez values through Comparator_4, Comparator_5, the NOT Gate, and DFF_1 to generate the VGA_blank signal. Also, the output of Comparator_5 is used by Counter_3 to generate the read address for the Memory unit and by DFF_2 to send the read data at the rising edge of the clock signal. Comparator_6 and Comparator_7 generate an intermediate signal depending on the hStartSync and hEndSync values, respectively, which is then ANDed using the AND Gate and passed as the VGA_hsync signal through DFF_3. Similarly, the signal generated by Comparator_8 and Comparator_9 is ANDed and passed through DFF_4 to generate the VGA_vsync signal.

This architecture requires less hardware resources than the existing one [28] due to the use of basic logic elements in an optimised way.

3.5 | Optimised Color to Gray Conversion

In many video processing applications, color images are converted into grey scale to reduce the complexity of processing. The equation for this conversion is derived using the averaging method [1] and is given in Equation (13) as follows:

    I_Gray = (1/3) × (R + G + B)                          (13)
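The divide-by-three in Equation (13) is awkward in hardware; the architecture instead uses the shift-and-add form given below as Equation (14), where 2^-2 + 2^-4 + 2^-6 = 0.328125 ≈ 1/3. A quick software check of that approximation (an illustrative Python model, not the VHDL; the 8 fractional bits are an assumption):

```python
# Check of the shift-and-add substitute for 1/3 used in the
# colour-to-grey conversion: 2^-2 + 2^-4 + 2^-6 = 0.328125 sits
# slightly under 1/3, so the result undershoots the exact average
# by at most one level for 4-bit colour planes.

FRAC = 8  # fractional bits kept in the intermediate stage (assumed)

def color_to_gray(r, g, b):
    s = (r + g + b) << FRAC
    acc = (s >> 2) + (s >> 4) + (s >> 6)  # ~ s / 3 with shifts and adds
    return acc >> FRAC

for r in range(16):
    for g in range(16):
        for b in range(16):
            exact = (r + g + b) // 3
            assert 0 <= exact - color_to_gray(r, g, b) <= 1
```

For wider colour planes the absolute undershoot grows with (R + G + B), which is the price paid for removing the divider.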
FIGURE 4 Proposed architecture of optimised RGB565 to RGB444 conversion

where I_Gray → Intensity value of the grey pixel; R → Red pixel values of the image; G → Green pixel values of the image; B → Blue pixel values of the image.

Equation (13) is modified into Equation (14) to generate an efficient hardware architecture [20]:

    I_Gray ≈ (LS2 + LS4 + LS6) × (R + G + B)              (14)

The hardware architecture of the Optimised Color to Gray Conversion block is designed from Equation (14) in the same way that the RGB format Conversion blocks are designed from their respective modified equations.

3.6 | Optimised VGA to HDMI conversion

Most commonly, the VGA, HDMI, DVI, etc., interfacing standards are used by display devices such as TVs, monitors, etc. Among these standards, the HDMI standard is the most commonly used in newly manufactured display devices due to its support of uncompressed high quality audio and video interfaces through a single cable [29].

The generalized architecture of the VGA to HDMI Conversion block [30] is used to build the Optimised VGA to HDMI Conversion block by optimizing some of its internal blocks at the architectural level, where the HDMI 1.4 standard [31] is considered suitable for transmitting videos of 640 × 480 resolution over the channel (wire). The proposed Optimised VGA to HDMI Conversion block consists of the Modified TMDS Encoder, Modified DDC Format Generation, Serialiser, and Extended Display Identification Data (EDID) ROM units. The EDID [32] is used to store the supported display related information defined by the HDMI 1.4 standard [31], which is then converted into the DDC [33] format by the Modified DDC Format Generation block. Further, the pixel values are encoded through the Modified TMDS Encoder with the help of the HSYNC and VSYNC signals and then converted into a serial format by the Serializer block [34].

3.6.1 | EDID ROM

In any HDMI protocol, the operational characteristics of the video, such as resolution, frame rate, version, etc., must be exchanged between source and sink at the beginning for proper synchronization. These values are normally constant for a specific resolution [32]. The EDID values corresponding to 640 × 480 resolution are stored into the EDID ROM block in the proper order for further use. The standard file structure for EDID is considered for this implementation. EDID versions 1.3 and above use a total of 256 bytes to define the corresponding EDID structure. In such cases the Extension Flag is defined by a total of 128 bytes. For our implementation, the CEA-861 standard [33] is used to define the Extension Flag field.

3.6.2 | Modified DDC Format Conversion

The HDMI standard exchanges EDID ROM data between its Source and Sink using the DDC Protocol [33], which is normally
FIGURE 5 Proposed architecture of optimised VGA signal generator. VGA, Video Graphics Array

a standard serial signaling protocol. It is almost similar to the Philips I2C [21] protocol, which consists of three wires, namely Serial Data (SDA), Serial Clock (SCL), and a high logic level (+5 V). The DDC format [33] uses the Inter-Integrated Circuit (I2C) [21] specifications to transfer the TMDS encoded data. To design the proposed DDC Protocol, the NXP UM10204 I2C bus specifications [35] for single master buses are considered. Those specifications mainly give the details of the Start, Stop, and Acknowledgement signals for proper communication. To implement this, the FSM model is used, which is given in Figure 6. The proposed state machine uses 8-bit data and addressing bits. Upon start-up, the state machine immediately enters the Ready state and stays in that state until Send = 1 and Restart = 0, which makes the machine go to the Start state, generating the proper start conditions for data transfer. Next, the machine goes to the Address state, which fetches the 8-bit address from the EDID ROM and serializes it using the PISO architecture [36], tracked by the bit_counter variable. The machine stays in this state until bit_counter = 0. When bit_counter = 0, the machine reaches the Slave ACK1 state, where it waits for the acknowledgment from the slave device. If the acknowledgement is received, it performs a similar operation using data from the EDID ROM, or else it starts sending the address values once again. Once Slave ACK2 is received by the sender, the address value is incremented by '1' and the machine goes back to the Start state again. In this way, when Slave ACK2 is received for the address_max value, the machine enters the Stop state and stays in that state until the process is restarted with Restart = 1, which forces the machine to go to the Ready state and perform the entire operation once again.

3.6.3 | Modified TMDS encoder

Transition Minimized Differential Signaling (TMDS) is used to encode data at a very high speed for various video interfaces. It uses a form of 8b/10b encoding [37] which reduces electromagnetic interference to
provide faster signal transmission with reduced noise [38]. The TMDS encoder encodes the input data and sends it serially at high speed, minimizing transitions while retaining enough transitions for clock recovery. This process keeps the numbers of 1s and 0s on the line nearly equal to improve the noise margin. The algorithm used to implement the Modified TMDS Encoding is given in Algorithm 2. The Modified TMDS Encoder unit is designed using basic gates and flip-flops with simpler interconnections between them to produce a more optimised hardware architecture than the existing one [39].

Algorithm 2 Modified TMDS Encoder
Inputs: data, clk, c, blank;
Variables: ones, word, word_inv, disparity, bias, xored, xnored;
Output: encoded;

ones = Total number of 1s present in input data;
xored(0) = xnored(0) = data(0); xored(8) = 1; xnored(8) = 0;
disparity = (12 + ones);
for (i = 1; i ≤ 7; i++) do
    xored(i) = {data(i) ⊕ xored(i − 1)};
    xnored(i) = {data(i) ⊙ xnored(i − 1)};
end for;
if {(ones > 4) | (ones = 4 & data(0) = 0)} then
    word = xnored; word_inv = not(xnored);
else
    word = xored; word_inv = not(xored);
end if;
if {(clk) rising_edge} then
    if (blank = 0) then
        if (c = 0) then
            encoded = 852;
        else if (c = 1) then
            encoded = 171;
        else if (c = 2) then
            encoded = 340;
        else
            encoded = 683;
        end if;
    else
        if {(bias = 0) | (disparity = 0)} then
            if {word(8) = 0} then
                encoded = {1, word(7 : 0)}; bias = {bias + disparity};
            else
                encoded = {2, word(7 : 0)}; bias = {bias − disparity};
            end if;
        else if {[bias(3) ⊙ disparity(3)] = 1} then
            encoded = {1, word(8), word_inv(7 : 0)}; bias = {bias + word(8) − disparity};
        else
            bias = {bias − word(8) + disparity}; encoded = (0, word);
        end if;
    end if;
end if;

4 | FPGA IMPLEMENTATION

The proposed architecture is coded using the standard VHDL language [3], synthesized, and implemented on the Digilent NexysVideo (xc7a200t-1sbg484c) [4] and Zybo Z7-10 (xc7z010-1clg400c) [5] FPGA boards separately through the bit-file generated by the Xilinx Vivado 2018.3 tool, with the ports assigned in a .xdc file. The generated schematic diagram of the proposed architecture after the post-implementation step is shown in Figure 7 for the NexysVideo FPGA board.

The hardware utilizations of the proposed interfacing architecture after the post-implementation stage, including most of the internal blocks, are given in Table 1. The hardware utilization and power requirements of the entire camera interfacing architecture are lower than the sum of all the individual components present internally in the architecture for both cases, respectively, which is mainly due to the use of the Balanced Synthesis and Optimization [16] operations present in the Xilinx Vivado tool.

5 | REAL TIME IMPLEMENTATION

The image of the experimental product setup using the proposed camera interfacing architecture is shown in Figure 8, where the OV7670 camera is mounted on a stand for better focusing and is connected to a Digilent NexysVideo FPGA board through PMOD ports using general purpose jumper wires. The architecture programs the camera into the correct mode and accepts video sequences, which are processed and sent to the available HDMI/VGA port for display through the connected display device.

It is also possible to convert this architecture into a real time product, which requires replacing the general purpose jumper wires used in Figure 8 with a custom designed Printed Circuit Board (PCB). The architecture of the custom designed PCB is shown in Figure 9, which is a simple two layered PCB used to provide a proper connection between the camera and the FPGA module.

6 | PERFORMANCE EVALUATION

The performance of the proposed architecture is compared with various existing architectures or techniques with respect to data accuracy, board compatibility, cost, and hardware utilization to check the superiority of the proposed architecture.
10 SARKAR ET AL.
FIGURE 6 Proposed finite state machine model for modified Display Data Channel format conversion
FIGURE 7 Generated schematics of the proposed architecture on NexysVideo FPGA board
TABLE 1 Hardware utilisations of proposed camera interfacing architecture

FPGA: Artix-7 (xc7a200t-1sbg484c)

Parameters               | SCCB interface | Camera controller | Image capture | RGB565 to RGB444 | VGA generator | RGB444 to RGB565 | RGB to grey | DDC interface | TMDS encoder | Total module
Slice LUTs               | 50 | 81 | 53 | 5  | 39 | 6  | 44 | 7  | 41 | 419
Slice registers          | 73 | 76 | 60 | 32 | 55 | 39 | 26 | 25 | 14 | 376
Flip-flops               | 73 | 76 | 60 | 32 | 55 | 39 | 26 | 25 | 14 | 376
Slices                   | 22 | 30 | 38 | 28 | 21 | 30 | 16 | 10 | 11 | 221
LUTs as logic            | 50 | 81 | 53 | 5  | 39 | 6  | 44 | 7  | 41 | 419
BRAMs                    | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 60
Memory (KB)              | —  | —  | —  | —  | —  | —  | —  | —  | —  | 225
Negative slack^a (ns)    | —  | —  | —  | —  | —  | —  | —  | —  | —  | 3.556
Hold slack^a (ns)        | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.129
Pulse width slack^a (ns) | —  | —  | —  | —  | —  | —  | —  | —  | —  | 3.000
Latency (ms)             | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.0256
Total power (W)          | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.405

FPGA: Zynq-7 (xc7z010-1clg400c)

Parameters               | SCCB interface | Camera controller | Image capture | RGB565 to RGB444 | VGA generator | RGB444 to RGB565 | RGB to grey | DDC interface | TMDS encoder | Total module
Slice LUTs               | 54 | 87 | 60 | 8  | 48 | 10 | 49 | 11 | 46 | 434
Slice registers          | 79 | 83 | 66 | 36 | 61 | 43 | 31 | 27 | 22 | 484
Flip-flops               | 79 | 83 | 66 | 36 | 61 | 43 | 31 | 27 | 22 | 484
Slices                   | 25 | 37 | 49 | 30 | 28 | 39 | 20 | 16 | 19 | 283
LUTs as logic            | 54 | 87 | 60 | 8  | 48 | 10 | 49 | 10 | 46 | 434
BRAMs                    | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 0  | 60
Memory (KB)              | —  | —  | —  | —  | —  | —  | —  | —  | —  | 225
Negative slack^a (ns)    | —  | —  | —  | —  | —  | —  | —  | —  | —  | 3.516
Hold slack^a (ns)        | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.117
Pulse width slack^a (ns) | —  | —  | —  | —  | —  | —  | —  | —  | —  | 2.293
Latency (ms)             | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.0256
Total power (W)          | —  | —  | —  | —  | —  | —  | —  | —  | —  | 0.361

Abbreviations: DDC, Display Data Channel; SCCB, Serial Camera Control Bus; TMDS, Transition Minimised Differential Signalling; VGA, Video Graphics Array.
^a For worst case scenario.
FIGURE 8 Experimental setup for real time implementation

6.1 | Data accuracy

To check the accuracy of the Optimised RGB565 to RGB444 Conversion block, the full range of RGB565 values is considered for each color plane separately, and the corresponding RGB444 values are calculated using Equation (1). The same inputs are also fed to the proposed conversion block, and the corresponding |Error| is calculated [40] separately for each color plane using Equation (15) as follows:

|Error| = |A − B|    (15)

where |Error| → the error that occurred in the proposed architecture;
A → actual values calculated from the conversion equation;
B → output values from the proposed architecture.

The |Error| is calculated for both conversion processes (i.e., RGB565 to RGB444 and RGB444 to RGB565, respectively) over the corresponding full range of RGB values of the different color planes, where only integer values are considered. The |Error| occurring in both conversion processes is always less than '1' due to the use of Q-format for intermediate calculations. As a result, when both conversion blocks are cascaded to obtain a double conversion, by design the |Error| introduced in the final result must also be equal to or less than '1'. These small errors are introduced by truncation errors [3, 24] that occur in binary arithmetic calculations.

6.2 | Board compatibility and approximated costs

The FPGAs manufactured by Xilinx and Altera are the ones most commonly used by PCB manufacturing companies to make FPGA boards [41]. As a result, these board manufacturers supply many extra add-on components as accessories for some specific FPGA boards to perform selected operations, such as video and audio processing. Along with these accessories, the interfacing program is also supplied by the manufacturer in the form of IP Cores. Due to various issues such as compatibility, availability, and ease of implementation, in most cases those IP Cores are developed through embedded programming with the help of supporting embedded platforms, which are available on high-level FPGA boards only. On the other hand, the proposed architecture is designed through optimised architectures coded in the standard VHDL language [3] with a minimal number of IP Cores [15, 19, 34] and connected to the FPGA board through two PMOD ports [18], which allows the architecture to be implemented on most of the available FPGA boards.

FIGURE 9 Schematic of the proposed PCB architecture

The board support comparisons of the proposed and existing camera interface architectures are given in Table 2 along with the corresponding costs, where the costs of the required peripherals (i.e., the stand and the connecting PCB, respectively) are considered along with the camera cost. From Table 2, it can be seen that the camera interfaces [42–45] developed by the corresponding manufacturers support only a few of their boards, whereas the proposed architecture supports any FPGA board with minimal peripherals for interfacing. To prove this, the proposed architecture is implemented on two FPGA boards of different levels. Moreover, the cost of the entire architecture is very low and mainly depends on the CMOS camera cost, owing to the absence of extra processing interfaces for the camera module [42–45].

6.3 | Hardware resources

The utilised hardware resources of most of the subblocks and of the total module are compared with the hardware resource utilizations of the corresponding existing techniques to prove that the proposed architecture is better in terms of hardware resource utilizations as well.
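The Q-format accuracy argument of Section 6.1 can be illustrated numerically. Equation (1) itself is not reproduced here, so the sketch below assumes, purely for illustration, a plain full-scale rescaling (v · 15/31 for a 5-bit channel) as the reference conversion A, and compares it with a truncated Q8 fixed-point version B, mirroring how a shift-and-add datapath would compute it; the constant and bit widths are illustrative, not the paper's exact design:

```python
Q = 8                               # assumed Q-format fraction width
SCALE_Q8 = round(15 / 31 * 2**Q)    # 5-bit -> 4-bit scale factor in Q8 (= 124)

def exact(v):
    """Reference value A from the assumed conversion equation."""
    return v * 15 / 31

def fixed_point(v):
    """Hardware-style value B: multiply in Q8, then truncate the fraction."""
    return (v * SCALE_Q8) >> Q

# |Error| = |A - B| over the full 5-bit input range, as in Equation (15)
max_error = max(abs(exact(v) - fixed_point(v)) for v in range(32))
assert max_error < 1                # truncation keeps |Error| below 1
```

With these assumptions the worst case sits just below 1, which is the bound the cascaded double conversion argument in Section 6.1 relies on.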
To get valid hardware resource comparisons, it is essential to use the utilization values of a particular block for the same or a similar kind of FPGA board.

6.3.1 | Camera controller

The hardware utilizations of the proposed camera controller and of the existing controller presented by Xiaokun et al. [46] are compared in Table 3. From the table, it can be seen that the proposed camera controller architecture requires fewer hardware resources than the existing one for the same FPGA board (Artix-7). This is because the proposed camera controller architecture is optimised for the specifications of the OmniVision camera series.

TABLE 2 Comparisons between existing and proposed module FPGA board supports

Techniques        | Supported FPGA boards    | Cost ($)
TDNext [42]       | ZedBoard                 | 70
Python 1300C [43] | ZedBoard and MicroZed    | 500
Pcam 5C [44]      | Zybo and ZedBoard        | 45
D8M [45]          | DE2-115, DE1-SoC and C5G | 80
Proposed          | Any FPGA board^a         | 20^b / 30^c

^a With required peripherals. ^b For OV7670 camera. ^c For OV9655 camera.

TABLE 3 Comparisons between existing and proposed camera controller

Parameters      | Xiaokun et al. [46] | Proposed architecture
FPGA            | Artix-7             | Artix-7
Slice LUTs      | 87                  | 81
Slice registers | 90                  | 76

6.3.2 | VGA signal generator

The hardware utilizations of the proposed VGA Signal Generator and of the existing VGA signal generators are compared in Table 4. The VGA Generator architecture presented by Xiaokun et al. [46] is implemented on an Artix-7 FPGA; its unoptimised use of logical components to generate the VGA signal leads to larger hardware utilizations than those of the proposed design. Similarly, Arun Babu [47] presented a Graphics Controller implemented on a Virtex-5 FPGA; the unoptimised use of generalized IP Cores inside that architecture requires larger hardware resources than the proposed design. On the other hand, the proposed VGA Generator architecture is optimised with respect to the camera architectural design, built from basic gates in a highly optimised way without using any IP Cores.

TABLE 4 Comparisons between existing and proposed VGA signal generator

Parameters      | Xiaokun et al. [46] | Arun Babu [47] | Proposed architecture
FPGA            | —                   | Virtex-5       | Virtex-5
Slice LUTs      | —                   | 241            | 76
Slice registers | —                   | 109            | 97
Flip-flops      | —                   | 109            | 97
FPGA            | Artix-7             | —              | Artix-7
Slice LUTs      | 134                 | —              | 39
Slice registers | 100                 | —              | 55

Abbreviation: VGA, Video Graphics Array.

6.3.3 | TMDS encoder

The hardware utilizations of the presented TMDS Encoder are compared in Table 5 with those of the existing TMDS encoder presented by Roshan and Patil [48], which was implemented on a Spartan-6 FPGA. The proposed architecture requires much less hardware for the same (Spartan-6) FPGA. This is mainly due to the TMDS algorithm being optimised using basic gates, comparators, adders, and subtractors.

TABLE 5 Comparisons between existing and proposed TMDS encoder

Parameters      | Roshan and Patil [48] | Proposed architecture
FPGA            | Spartan-6             | Spartan-6
Slice LUTs      | 148                   | 52
Slice registers | 87                    | 35
Flip-flops      | 82                    | 35
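The comparator, adder, and subtractor structure mentioned above implements the two stages of the standard TMDS algorithm, the tail of which appears in the listing of Section 4: transition minimisation followed by DC balancing. As a behavioural cross-check, both stages can be modelled in Python; this is a software sketch of generic TMDS, not the proposed VHDL itself, with bit ordering following the paper's data(i) notation:

```python
def minimise_transitions(d):
    """Stage 1: build the 9-bit word q_m from an 8-bit byte.

    Bit 8 records which chain was chosen (1 = XOR, 0 = XNOR), matching
    xored(8) = 1 and xnored(8) = 0 in the algorithm listing.
    """
    bits = [(d >> i) & 1 for i in range(8)]
    ones = sum(bits)
    xored, xnored = [bits[0]], [bits[0]]
    for i in range(1, 8):
        xored.append(bits[i] ^ xored[i - 1])
        xnored.append(1 - (bits[i] ^ xnored[i - 1]))  # XNOR chain
    if ones > 4 or (ones == 4 and bits[0] == 0):
        return sum(b << i for i, b in enumerate(xnored))        # q_m(8) = 0
    return sum(b << i for i, b in enumerate(xored)) | (1 << 8)  # q_m(8) = 1


def dc_balance(q_m, cnt):
    """Stage 2: decide with a comparator whether to invert q_m's low byte
    so the running disparity cnt stays near zero; this is the part mapped
    onto comparators, adders, and subtractors in hardware.
    Returns the 10-bit line symbol and the updated disparity counter.
    """
    low = q_m & 0xFF
    n1 = bin(low).count("1")          # ones in the low byte
    n0 = 8 - n1                       # zeros in the low byte
    sel = (q_m >> 8) & 1              # chain-select bit from stage 1
    if cnt == 0 or n1 == n0:
        out = ((1 - sel) << 9) | (sel << 8) | (low if sel else ~low & 0xFF)
        cnt += (n1 - n0) if sel else (n0 - n1)
    elif (cnt > 0 and n1 > n0) or (cnt < 0 and n0 > n1):
        out = (1 << 9) | (sel << 8) | (~low & 0xFF)   # invert low byte
        cnt += 2 * sel + (n0 - n1)
    else:
        out = (sel << 8) | low                        # keep low byte
        cnt += (n1 - n0) - 2 * (1 - sel)
    return out, cnt


# The four constants in the Section 4 listing (852, 171, 340, 683) are the
# standard 10-bit TMDS control tokens, sent during blanking instead of
# DC-balanced data:
CTRL_TOKENS = {0: 0b1101010100, 1: 0b0010101011,
               2: 0b0101010100, 3: 0b1010101011}
```

Feeding a constant 0x00 pixel stream, for instance, yields symbols drawn only from {0b0100000000, 0b1111111111} while the disparity counter stays bounded, which is the DC-balance property the second stage exists to guarantee.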
TABLE 6 Comparisons between existing and proposed camera interface architecture

Parameters   | Xiaokun et al. [46] | Bhowmik et al. [49] | Zhou et al. [50] | Xilinx IP Core [51] | Honegger et al. [52] | Proposed architecture
FPGA         | Artix-7             | —                   | —                | —                   | Artix-7              | Artix-7
Slice LUTs   | 753                 | —                   | —                | —                   | 14,563               | 419
BRAMs        | 106                 | —                   | —                | —                   | —                    | 60
Memory (KB)  | —                   | —                   | —                | —                   | 3472                 | 225
Latency (ms) | —                   | —                   | —                | —                   | 2                    | 0.0256
FPGA         | —                   | Zynq-7              | Zynq-7           | Zynq-7              | —                    | Zynq-7
Slice LUTs   | —                   | 7357                | 39,789           | 30,614              | —                    | 434
Flip-flops   | —                   | 8457                | 43,871           | —                   | —                    | 484
6.3.4 | Total module

The hardware resources of the proposed camera interfacing architecture are compared with those of similar existing architectures in Table 6. An effective CMOS camera interface architecture is presented by Xiaokun et al. [46], implemented on the Artix-7 FPGA board and coded in Verilog HDL; the main reason for its larger hardware utilization than the proposed design is the use of subblocks in the overall architecture without any optimization. A CPU- and FPGA-based camera interfacing architecture is presented by Bhowmik et al. [49], implemented on the Xilinx Zynq-7000 SoC FPGA board with the Vivado tool; the main reason for its larger hardware requirements than the proposed design is the use of generalized IP Cores to implement the total architecture without proper optimizations. A smart camera architecture is presented by Zhou et al. [50], implemented on the Zynq-7020 FPGA board using the Sum of Absolute Differences (SAD) based Mosaic Algorithm; using the SAD based Mosaic Algorithm in an unoptimised way increases the hardware requirements. Xilinx provides IP Cores [51] to interface a specific camera with the Zynq-7000 SoC FPGA board using the on-board ARM Cortex-A9 through embedded programming techniques, which increases hardware requirements drastically. A video processing system is presented by Honegger et al. [52] which uses an FPGA for image acquisition and a mobile CPU for processing; it uses an MT9V034 CMOS camera and an Artix-7 FPGA to acquire video frames, and its interfacing architecture is implemented using embedded techniques, which increases hardware utilizations. On the other hand, in the proposed design each subblock is designed in an optimised way to obtain optimised hardware utilizations for the entire architecture. This is achieved by replacing complex logical architectures with corresponding architectures built from basic gates.

7 | CONCLUSION

In this paper, an efficient hardware architecture to interface a low cost digital camera with an FPGA is proposed, which can be used for real time video capturing and processing. To generate an efficient interfacing hardware architecture, each subblock (i.e., Controller, Image Capture, Color Plane Conversion, VGA Signal Generator, and VGA to HDMI Conversion) is optimised using different optimizing techniques. Parallel architecture is used to optimise the Image Capture and VGA Signal Generator blocks. Similarly, the use of shifters and adders reduces the hardware utilization of the different Color Plane Conversion blocks. Modified DDC Conversion and TMDS Encoder blocks are used to optimize the VGA to HDMI Conversion, where the DDC Conversion block is optimised using the FSM and the TMDS Encoder block is modified at the architectural level.

However, at the time of optimization of these architectures, data accuracy was also taken care of by considering sufficient intermediate bit sizes. As a result, the proposed method has lower hardware complexity, lower cost, and higher accuracy compared to existing techniques. In future, an interface for higher resolution cameras (HD, 4K etc.) with higher frame rates and more sophisticated processing algorithms to generate better quality images will be implemented.

8 | ABBREVIATIONS AND APPENDICES

The abbreviations used in this paper are as follows:
DDC ← Display Data Channel; EDID ← Extended Display Identification Data; HDMI ← High Definition Multimedia Interface; I2C ← Inter-Integrated Circuit; SCCB ← Serial Camera Control Bus; TMDS ← Transition Minimised Differential Signalling; VGA ← Video Graphics Array.

Similarly, the symbols used in all the Algorithms and Figures present in this paper are as follows:
⊙ ← XNOR Operation; ⨁ ← XOR Operation; & ← AND Operation; | ← OR Operation; (,)/{,}/[,] ← Concatenation Operation; (x : y) ← Data of Length (x − y + 1); (N) ← Bit Present in Nth Position of a Binary Digit.

ORCID
Sayantam Sarkar https://orcid.org/0000-0002-5763-8692
REFERENCES
1. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, (3rd ed.). Pearson Education, London (2008)
2. Tekalp, A.M.: Digital Video Processing, (2nd ed.). Prentice Hall, Hoboken (2015)
3. Roth, C.H.: Digital System Design Using VHDL. Cengage Learning, Boston (2003)
4. Digilent NexysVideo FPGA Board User Guide. https://reference.digilentinc.com/reference/programmablelogic/nexysvideo/referencemanual. Accessed 26 Mar 2020
5. Digilent Zybo Z7 FPGA Board User Guide. https://reference.digilentinc.com/_media/reference/programmablelogic/zyboz7/zyboz7_rm.pdf. Accessed 26 Mar 2020
6. Said, Y., et al.: Embedded real-time video processing system on FPGA. In: Proceedings of the 7th International Conference on Image and Signal Processing, pp. 85–92. Springer, North Africa (2012). https://doi.org/10.1007/9783642312540_10
7. Abdaoui, A., et al.: Video acquisition between USB 2.0 CMOS camera and embedded FPGA system. In: Proceedings of the 5th International Conference on Signal Processing and Communication Systems, pp. 1–5, USA (2011). https://doi.org/10.1109/ICSPCS.2011.6140863
8. Birem, M., Berry, F.: DreamCam: a modular FPGA-based smart camera architecture. J. Syst. Archit. 26, 519–527 (2014). https://doi.org/10.1016/j.sysarc.2014.01.006
9. Maldeniya, B., et al.: Computationally efficient implementation of video rectification in an FPGA for stereo vision applications. In: Proceedings of the 5th IEEE International Conference on Information and Automation for Sustainability, pp. 219–224, Sri Lanka (2010). https://doi.org/10.1109/ICIAFS.2010.5715663
10. Mosqueron, R., Dubois, J., Paindavoine, M.: Embedded image processing/comparison for high speed CMOS sensor. In: Proceedings of the 14th EURASIP European Signal Processing Conference (EUSIPCO 2006), vol. 2010, pp. 1–17, Florence (2010). https://doi.org/10.1155/2010/920693
11. Zhao, B., Zhang, X., Chen, S.: A CMOS image sensor with on-chip motion detection and object localization. In: Proceedings of the IEEE International Conference on Custom Integrated Circuits, pp. 1–4, USA (2011). https://doi.org/10.1109/CICC.2011.6055400
12. OmniVision OV7670 Camera Datasheet. https://www.voti.nl/docs/OV7670.pdf. Accessed 26 Mar 2020
13. OmniVision OV9655 Camera Datasheet. http://electricstuff.co.uk/OV9655datasheetannotated.pdf. Accessed 26 Mar 2020
14. OmniVision SCCB Interfacing Datasheet. http://www4.cs.umanitoba.ca/~jacky/Teaching/Courses/74.795LocalVision/ReadingList/ovsccb.pdf. Accessed 26 Mar 2020
15. Xilinx Clocking Wizard IP Core Datasheet. https://www.xilinx.com/support/documentation/ip_documentation/clk_wiz/v5_3/pg065clkwiz.pdf. Accessed 26 Mar 2020
16. Vivado User Manual. https://www.xilinx.com/support/documentation/sw_manuals/xilinx2017_1/ug910vivadogettingstarted.pdf. Accessed 26 Mar 2020
17. Hagara, M., et al.: Grayscale image formats for edge detection and for its FPGA implementation. Microprocess. Microsyst. 75, 103056 (2020). https://doi.org/10.1016/j.micpro.2020.103056
18. Digilent PMOD Specifications. https://reference.digilentinc.com/reference/pmod/specification. Accessed 26 Mar 2020
19. Xilinx Block RAM IP Core Datasheet. https://www.xilinx.com/support/documentation/user_guides/ug473_7Series_Memory_Resources.pdf. Accessed 26 Mar 2020
20. Bhairannawar, S.S., et al.: Implementation of fingerprint based biometric system using optimised 5/3 DWT architecture and modified CORDIC based FFT. Circ. Syst. Signal Process. 37(1), 342–366 (2018). https://doi.org/10.1007/s0003401705550
21. Fenger, C., Paret, D.: The I2C Bus: From Theory to Practice, (Har/Dskt edn.). Wiley-Blackwell, Hoboken (1997)
22. Hu, Z.W.: I2C protocol design for reusability. In: Proceedings of the 3rd IEEE International Symposium on Information Processing, pp. 83–86, China (2010). https://doi.org/10.1109/ISIP.2010.51
23. Bharath, K.B., Kumaraswamy, K.V., Swamy, R.K.: Design of arbitrated I2C protocol with DO254 compliance. In: Proceedings of the IEEE International Conference on Emerging Technological Trends, pp. 1–5, India (2016). https://doi.org/10.1109/ICETT.2016.7873672
24. Weste, N., Eshraghian, K.: Principles of CMOS VLSI Design: A System Perspective, (2nd ed.). Addison-Wesley, Boston (2002)
25. Singh, A., Srinivasan, S.: Digital Signal Processing Implementations: Using DSP Microprocessors – With Examples from TMS320C54xx. Nelson Engineering, Boston (2003)
26. Wang, H.B., et al.: VGA display driver design based on FPGA. In: Proceedings of the 18th IEEE/ACIS International Conference on Computer and Information Science (ICIS), pp. 530–535, China (2019). https://doi.org/10.1109/ICIS46139.2019.8940166
27. Shi, D., Ye, X.: Design of VGA display system based on CPLD and SRAM. In: Proceedings of the Third International Conference on Intelligent System Design and Engineering Applications, pp. 579–582, China (2013). https://doi.org/10.1109/ISDEA.2012.141
28. Wang, G., Guan, Y., Zhang, Y.: Designing of VGA character string display module based on FPGA. In: Proceedings of the IEEE International Symposium on Intelligent Ubiquitous Computing and Education, pp. 1–5, China (2009). https://doi.org/10.1109/IUCE.2009.12
29. Venuti, S., Chase, J., Iwami, L.: High-definition multimedia interface (HDMI). In: Handbook of Visual Display Technology, pp. 1–13. Springer (2015). https://doi.org/10.1007/9783642359477_372
30. Kubiak, I., Przybysz, A.: DVI (HDMI) and display port digital video interfaces in electromagnetic eavesdropping process. In: Proceedings of the International Symposium on Electromagnetic Compatibility EMC EUROPE, pp. 388–393, Spain (2019). https://doi.org/10.1109/EMCEurope.2019.8872097
31. HDMI 1.4 Specifications. https://www.hdmi.org/manufacturer/hdmi_1_4/. Accessed 26 Mar 2020
32. Video Electronics Standards Association Datasheet for EDID: Enhanced Extended Display Identification Data Standard. Release A, Revision 2 (2006)
33. Video Electronics Standards Association Datasheet for DDC: Enhanced Display Data Channel (EDDC) Standard, Version 1.2 (2007)
34. Xilinx LVDS IP Core. https://www.xilinx.com/support/documentation/application_notes/xapp1315lvdssourcesynchserdesclockmultiplication.pdf. Accessed 26 Mar 2020
35. NXP UM10204 Bus Specifications. https://www.nxp.com/docs/en/userguide/UM10204.pdf. Accessed 26 Mar 2020
36. Sobhan Bhuiyan, M.A., et al.: Low power D flip-flop serial in/parallel out based shift register. In: Proceedings of the IEEE International Conference on Advances in Electrical, Electronic and Systems Engineering, pp. 180–184. IEEE, Putrajaya (2016). https://doi.org/10.1109/ICAEES.2016.7888034
37. Park, J., et al.: A novel stochastic model-based eye-diagram estimation method for 8B/10B and TMDS-encoded high-speed channels. IEEE Trans. Electromagn. Compat. 60(5), 1510–1519 (2018). https://doi.org/10.1109/TEMC.2017.2766295
38. Sreerama, C.: Effects of skew on EMI for HDMI connectors and cables. In: Proceedings of the IEEE International Symposium on Electromagnetic Compatibility, pp. 452–455, USA (2006). https://doi.org/10.1109/ISEMC.2006.1706346
39. Fang, C.H., Lung, I.T., Fan, C.P.: Absolute difference and low-power bus encoding method for LCD digital display interfaces. VLSI Des. 2012, 1–6 (2012). https://doi.org/10.1155/2012/657897
40. Sarkar, S., Bhairannawar, S.S.: Efficient FPGA architecture of optimised Haar wavelet transform for image and video processing applications. Multidimens. Syst. Signal Process. 32, 821–844 (2021). https://doi.org/10.1007/s11045020007594
41. Wolf, W.: FPGA Based System Design. Pearson Education, London (2004)
42. Avnet TDNext Camera Datasheet. http://zedboard.org/sites/default/files/product_briefs/5338PBAESPMODTDM114GV1d.pdf. Accessed 26 Mar 2020
43. Avnet Python 1300C Camera Datasheet. https://www.avnet.com/opasdata/d120001/medias/docus/187/PBAESCAMONP1300CGV2ProductBrief.pdf. Accessed 26 Mar 2020
44. Digilent Pcam 5C Camera Datasheet. https://store.digilentinc.com/pcam5c5mpfixedfocuscolorcameramodule. Accessed 26 Mar 2020
45. Terasic D8M Camera Datasheet. https://www.intel.com/content/dam/alterawww/global/en_US/portal/dsn/42/docusdsnbk425008051605255d8mgpiousermanual.pdf. Accessed 26 Mar 2020
46. Yang, X., Zhang, Y., Wu, L.: A scalable image/video processing platform with open source design and verification environment. In: Proceedings of the 20th IEEE International Symposium on Quality Electronic Design (ISQED), pp. 110–117, USA, 6–7 March 2019. https://doi.org/10.1109/ISQED.2019.8697816
47. Babu, A.: FPGA based graphics controller. Int. J. Eng. Res. Technol. 3(8), 1–6 (2015). 22780181
48. Kumar Chate, R., Patil, S.S.: Serial transmission of video signal using TMDS encoder and VHDL implementation. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 7(7), 3119–3124 (2018). https://doi.org/10.15680/IJIRCCE.2018.0606051
49. Bhowmik, D., et al.: Power efficient dataflow design for a heterogeneous smart camera architecture. In: Proceedings of the IEEE International Conference on Design and Architectures for Signal and Image Processing, pp. 1–6, Germany (2017). https://doi.org/10.1109/DASIP.2017.8122128
50. Zhou, W., et al.: Real-time implementation of panoramic mosaic camera based on FPGA. In: Proceedings of the IEEE International Conference on Real Time Computing and Robotics, pp. 204–309, Cambodia (2016). https://doi.org/10.1109/RCAR.2016.7784026
51. Xilinx Camera IP Core. https://www.xilinx.com/support/documentation/application_notes/xapp7941080p60camera.pdf. Accessed 26 Mar 2020
52. Honegger, D., Oleynikova, H., Pollefeys, M.: Real-time and low latency embedded computer vision hardware based on a combination of FPGA and mobile CPU. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4930–4935, USA (2014). https://doi.org/10.1109/IROS.2014.6943263

How to cite this article: Sarkar, S., Bhairannawar, S.S., Raja, K.B.: FPGACam: A FPGA based efficient camera interfacing architecture for real time video processing. IET Circuits Devices Syst. 2021;1–16. https://doi.org/10.1049/cds2.12074