Huawei SDC Technical White Paper - 290819
Issue V2.1
Date 2019-05-25
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Contents
1 Product Overview
1.1 Introduction
Drawing on its extensive experience in audio and video technologies and video surveillance
system integration, as well as the technical features of network cameras in the security
industry, Huawei has rolled out the IPC6000 series, M series, and X series Software-Defined
Cameras (SDCs). These cameras are mission-critical products in the Huawei video
surveillance solution.
Huawei IPC6000 series, M series, and X series SDCs are video surveillance product series for
indoor and outdoor surveillance. They integrate a variety of functional modules such as video
compression, network transmission, intelligent algorithms (including license plate recognition,
face detection, facial recognition, facial attribute recognition, and personal attribute
recognition), and alarm handling. These cameras run an embedded Linux operating system
and an embedded web communication module. They provide live video viewing and
pan-tilt-zoom (PTZ) camera control functions over networks (LAN, Internet, or wireless
network) to implement all-dimensional video surveillance. Huawei cameras also provide other
functions such as intelligent analysis, voice intercom, alarm input, relay output, motion
detection, analog video output, SFP interface, and local recording storage on SD cards. These
cameras can work with the network video storage and recording system and management
platform software to construct a large-scale and distributed intelligent video surveillance (IVS)
system.
[Figure 1-1: SDC hardware block diagram — lens, ICR module, image sensor, digital image
encoder, compression algorithm module, processing unit, CPU, Ethernet port (RJ45, etc.),
video output port (CVBS, etc.), alarm port, Flash memory, DRAM, and other ports (MIC and
SD card, etc.)]
As shown in Figure 1-1, an SDC is a highly integrated device that consists of the lens, IR-Cut
Filter Removable (ICR) module, image sensor, digital image encoder, compression algorithm
module, CPU, network unit, and local storage unit.
A scene (monitored object) is imaged on the image sensor through the lens and then
compressed to streams of a specified format (for example, H.264 or H.265) through the
encoder. The CPU then sends the streams and instructions to the surveillance center through
the network unit to implement video surveillance functions such as live video viewing,
recording storage, and recording playback.
Huawei SDCs adopt the integrated design and integrate a variety of functions such as
intelligent analysis (intelligence), real-time stream transmission (transmission), network
control (control), storage management, policy management (management), alarm
management, and SFP module. Additionally, the SDCs support multiple mainstream network
communication protocols such as ONVIF and TEYES.
Huawei SDCs use the embedded operating system and can work independently without the
assistance of computers. Therefore, the SDCs feature high integration and can significantly
enhance surveillance site deployment flexibility and system integration capabilities in new
and reconstructed video surveillance projects. The SDCs support remote maintenance and
alarm reporting, enhancing reliability of the entire video surveillance system and efficiently
reducing project implementation and O&M costs.
2 Related Technologies
To adapt to various complicated and harsh environments, Huawei SDCs adopt a series of
technologies and means that comply with international standards, effectively enhancing the
camera reliability.
At the fabrication process layer, with proper material selection and surface design, Huawei
cameras achieve optimal heat dissipation and protection against mold, salt spray, moisture,
and shock, reduce pollution to the environment, and enhance adaptability to various common
environments.
At the device hardware layer, Huawei provides surge protection, EMC protection, thermal
design, and power supply design solutions and complies with international universal
standards to design SDCs. This helps enhance the camera reliability, reduce failure points, and
avoid complicated maintenance.
At the network transmission layer, the video buffering technology ensures video data integrity
in case of network outage.
At the data storage layer, the video buffering and digital watermark technologies ensure
surveillance data storage security.
At the overall system layer, the cameras adopt modular design, ensuring high product
performance and reliability.
2.1 HD
2.1.1 H.265 Codec Technology
High Efficiency Video Coding (HEVC), also known as H.265, is a video compression
standard, designed as a successor to the widely used AVC (H.264 or MPEG-4 Part 10). H.265
uses cutting-edge technologies to optimize the trade-off among bit rate, encoding quality,
delay, and algorithm complexity. H.265 focuses on
increasing the compression ratio, enhancing robustness and fault rectification capabilities,
reducing the real-time delay, channel resource obtaining time, and random access delay, and
lowering the complexity. Thanks to algorithm optimization, H.264 can transfer SD digital
images at a bit rate lower than 1 Mbit/s. H.265 can transfer 720p (1280 x 720 pixels) common
HD audio and video at a bit rate ranging from 1 Mbit/s to 2 Mbit/s.
Compared with H.264, H.265 provides more tools to reduce the bit rate. In terms of the
encoding unit, each macroblock (MB) in H.264 is fixed at 16 x 16 pixels, while each
macroblock in H.265 ranges from 8 x 8 pixels to 64 x 64 pixels. For areas with little
color variation (for example, red vehicle body and gray ground), the macroblocks segmented
are relatively larger and there are fewer codes after encoding. For areas with more details (for
example, tyre), the macroblocks segmented are relatively smaller and there are more codes
after encoding. In this case, key areas of images are encoded, reducing the overall bit rate and
enhancing encoding efficiency. Additionally, the intra-frame prediction mode of H.265
supports 33 types of directions (H.264 supports only eight types) and provides better motion
compensation and vector prediction methods.
H.265 aims to transfer higher-quality network video over limited bandwidth. Compared
with H.264, H.265 requires only about half the bandwidth to produce the same image
quality. In conclusion, H.265 enhances codec efficiency, saves transmission bandwidth
and storage space, and provides technical basis for future higher video resolution
development.
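The halved bandwidth translates directly into storage savings. As a rough illustration (the bit rates below are ballpark figures for a 1080p stream, not measured values):

```python
# Illustrative arithmetic only; bit rates are assumed ballpark figures.

def storage_gb(bitrate_mbps: float, hours: float) -> float:
    """Storage needed to record a stream at the given bit rate."""
    return bitrate_mbps / 8 * 3600 * hours / 1024  # Mbit/s -> GB

h264 = storage_gb(4.0, 24)  # e.g. 1080p over H.264 at ~4 Mbit/s
h265 = storage_gb(2.0, 24)  # same scene over H.265 at ~2 Mbit/s
print(round(h264, 1), round(h265, 1))  # → 42.2 21.1 (GB per camera-day)
```

Halving the bit rate halves both transmission bandwidth and recording storage, which compounds quickly across large camera deployments.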
2.1.2 SEC
During video data transmission over networks, if a network exception, unstable
communication, or high packet loss rate occurs, a series of problems may occur on the site,
for example, the received data packet is incomplete, video cannot be properly decoded or
played back, or artifacts or frame freezing occurs on video images.
Huawei SDCs support patented Super Error Correction (SEC) technology. With the
technology, a camera sends data with error correction codes. The recipient performs error
detection on the received data based on the error correction codes. If an error is detected, the
recipient uses the SEC recovery algorithm to recover lost packets, enhancing video image
effect and preventing image artifacts and frame freezing. With the SEC technology, Huawei
SDCs can ensure proper image display (without artifacts or frame freezing) in the case of up
to 20% packet loss rate.
[Figure: SEC encoding — the camera collects and sends SEC-encoded data; packets are lost
on the network; the client receives the data with packet loss and recovers the lost packets.]
To use the SEC function, cameras must support media transmission with the platform in SEC mode.
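Huawei does not publish the SEC algorithm itself; the sketch below only illustrates the general forward-error-correction idea it builds on, using a single XOR parity packet per group, which lets the recipient rebuild one lost packet per group:

```python
# Hedged sketch: the proprietary SEC algorithm is not described in this
# document. This shows the generic FEC idea only — one XOR parity
# packet per group can recover a single lost packet in that group.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(group: list) -> list:
    """Sender side: append a parity packet covering the group."""
    return group + [reduce(xor, group)]

def recover(received: list) -> list:
    """Receiver side: rebuild the single packet marked None, if any."""
    lost = [i for i, p in enumerate(received) if p is None]
    if len(lost) != 1:
        return received[:-1]          # nothing lost, or unrecoverable
    known = [p for p in received if p is not None]
    received[lost[0]] = reduce(xor, known)
    return received[:-1]              # strip the parity packet

pkts = [b"\x01\x02", b"\x0a\x0b", b"\x10\x20"]
sent = add_parity(pkts)
sent[1] = None                        # simulate packet loss in transit
assert recover(sent) == pkts
```

Real schemes such as the one SEC presumably uses add more parity per group to tolerate higher loss rates (the document cites up to 20%), at the cost of extra bandwidth.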
Huawei SDCs support the 9:16 aspect ratio. In narrow, high, and vertically long scenes, the
corridor mode allows a camera to rotate the video image by 90 degrees, which reduces the
portion of the walls on the video image and focuses on the corridor passage. This ensures that
the effective surveillance area accounts for about 50% of the total image pixels.
As shown in Figure 2-4 and Figure 2-5, in the same scene, the video image at 60 fps is smoother than
that at 30 fps.
2.1.5 P-Iris
The motor in the P-Iris lens precisely controls the position of the iris opening. Together with
the algorithm for optimizing the performance of the lens and image sensor, the P-Iris
automatically provides the best iris position for optimal image quality in all lighting
conditions. In bright situations, the P-Iris limits the closing of the iris to avoid blurring
(diffraction) caused when the iris opening is too small, delivering images with better contrast,
clarity, resolution, and depth of field (DoF).
This feature applies to IPC6681-Z20, IPC6285-VRZ, IPC6285-VMZ, and M series
bullet cameras.
The P-Iris function is enabled by default on the camera web page. Users can set P-Iris
parameters to enable manual iris control.
2.2 Intelligence
2.2.1 Intelligent Encoding
The intelligent encoding of most manufacturers in the industry is implemented through
dynamic GOP (reducing the bit rate) and dynamic ROI (improving the image quality of
moving objects). The I-frame in the dynamic GOP will cause large fluctuation of the overall
bit rate and pose higher network and decoding requirements. As for dynamic ROI, when an
object moves fast, the ROI cannot catch up with the object, and obvious breathing effects can
be detected on the edge of the ROI.
Huawei implements intelligent encoding by adjusting the internal encoding policy in the
encoder, which helps to decrease the bit rate and improve the image quality. The applied
technologies are as follows:
1. Adaptive variable bit rate (VBR) control algorithm
2. Intelligent reference frame mode and virtual I-frame
Adaptive VBR control
Adaptive VBR control allows bit rate fluctuation during bit rate collection to ensure stable
quality of encoded images.
The bit rate control algorithm detects the object status (moving or static) in the current scene,
uses a higher bit rate for encoding when an object is moving, and decreases the bit rate when
the object becomes static. Bit rate control is performed inside the encoder. The algorithm
determines the scene based on the motion amount detected during encoding and then adjusts
the bit rate control policy.
Compared with common VBR control, adaptive VBR control, while ensuring the image
quality, can effectively decrease the bit rate when an object is static and gradually increase the
bit rate when an object is moving. Adaptive VBR control features higher real-time
performance, progressive bit rate control, and excellent image quality.
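The encoder's actual rate-control loop is internal to the camera; the following minimal sketch only illustrates the adaptive VBR behavior described above, assuming a hypothetical per-frame motion score in [0, 1] that drives the target bit rate up quickly on motion and decays it slowly when the scene goes static:

```python
# Hedged sketch of adaptive VBR; all parameter values are illustrative,
# not the camera's actual rate-control settings.
def adaptive_vbr(motion_scores, floor_kbps=512, cap_kbps=4096,
                 up_step=0.5, down_step=0.1):
    """Raise the target rate quickly when motion appears and decay it
    slowly when the scene goes static (progressive bit rate control)."""
    rate = floor_kbps
    targets = []
    for m in motion_scores:
        wanted = floor_kbps + m * (cap_kbps - floor_kbps)
        step = up_step if wanted > rate else down_step
        rate += step * (wanted - rate)
        targets.append(round(rate))
    return targets

# Static scene, a burst of motion, then static again.
print(adaptive_vbr([0.0, 0.0, 0.9, 0.9, 0.0, 0.0]))
```

The asymmetric step sizes capture the behavior described above: the bit rate rises promptly for moving objects to preserve quality, and falls gradually once they stop.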
Intelligent reference frame mode and virtual I-frame
In intelligent reference frame mode, the P-frame references the Instantaneous Decoder
Refresh (IDR) frame (long-term reference frame) and forward reference frame (short-term
reference frame, that is, virtual I-frame). The temporal correlation between the two reference
frames is used to improve the encoding and compression performance. The intelligent
reference frame mode is mainly applied in video surveillance scenarios where cameras are
installed in fixed positions and there are both static and moving persons and objects.
For a static ROI, the temporal correlation between the long-term reference frame and the
current frame can be used to decrease the bit rate significantly and reduce the breathing and
trailing effects.
For a moving ROI, the short-term reference frame is used to perform motion estimation. In
intelligent reference frame mode, the IDR-frame interval is prolonged and virtual I-frames are
inserted periodically. This greatly decreases the bit rate in surveillance scenarios and improves
the image quality.
2. Loitering detection
Users can specify a surveillance area (rectangle or polygon) on the video image. When
an object remains in the surveillance area for a period longer than the preset duration, the
system generates an alarm, and frames and tracks the object. Users can also flexibly set
the detection sensitivity.
3. Intrusion detection
Users can specify a surveillance area (rectangle or polygon) on the video image. When
an object enters the surveillance area, the system generates an alarm, and frames and
tracks the object. Users can also flexibly set the detection sensitivity.
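The loitering rule above can be sketched as a point-in-polygon test plus a dwell timer. This is an illustration only: the camera's real detector works on tracked video objects, whereas here an object is just a stream of (time, x, y) samples:

```python
# Hedged sketch of loitering detection: polygon containment + dwell timer.

def in_polygon(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def loitering_alarm(track, poly, min_dwell_s=10.0):
    """Alarm once an object stays inside the area longer than the preset."""
    entered = None
    for t, x, y in track:
        if in_polygon((x, y), poly):
            entered = t if entered is None else entered
            if t - entered >= min_dwell_s:
                return True
        else:
            entered = None              # leaving the area resets the timer
    return False

area = [(0, 0), (10, 0), (10, 10), (0, 10)]
walk = [(t, 5, 5) for t in range(0, 12)]    # stays 11 s inside the area
assert loitering_alarm(walk, area)
```

Intrusion detection is the degenerate case of the same rule with a dwell threshold of zero: any entry into the polygon raises an alarm.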
2.2.3 Metadata
To enable the platform to flexibly use the intelligent analysis functions of cameras, Huawei
SDCs extract various objects generated in intelligent analysis and package them into
composite streams. All the object information is called metadata.
The intelligent analysis metadata includes the object location (coordinates, width, and height),
type (people, vehicles, or articles), speed, color, contour, and background (such as width and
height).
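The exact on-the-wire metadata format is not specified here; the following is a hypothetical sketch of one object record carrying the fields listed above, packaged as JSON purely for illustration:

```python
# Hypothetical sketch only: the actual metadata schema packaged into the
# composite stream is not published in this document. Field names mirror
# the attributes listed above (location, type, speed, color, contour,
# background size); all values are made up.
import json

object_metadata = {
    "object_id": 42,
    "type": "vehicle",                       # people / vehicles / articles
    "location": {"x": 312, "y": 188, "w": 96, "h": 54},
    "speed_px_per_s": 12.5,
    "color": "red",
    "contour": [[312, 188], [408, 188], [408, 242], [312, 242]],
    "background": {"width": 1920, "height": 1080},
}

packet = json.dumps(object_metadata)         # one record in the stream
print(packet)
```

Packaging object descriptions alongside the video lets the platform search, match, and alarm on attributes without re-running analysis on the raw frames.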
[Figure: adaptive bandwidth — when 1080p @ 4 Mbit/s video is transmitted by default over a
network with only 3 Mbit/s of bandwidth, artifacts and packet loss occur; if adaptive
bandwidth is enabled, the bit rate is automatically adjusted to about 3 Mbit/s, solving the
artifact and packet loss issues.]
[Figure: stream smoothing — the camera encoding module shapes video traffic before it
enters the transmission network.]
In the case of fixed network bandwidth, the system can perform traffic shaping on video data
to prevent burst network traffic during video transmission, ensuring video transmission
stability and reliability.
In actual surveillance scenarios, if an object in the surveillance view changes significantly
or the surveillance view varies greatly due to PTZ device rotation, the data packet size of
I-frames after H.264 encoding increases sharply and can exceed the maximum bandwidth
allowed on the actual transmission network. The sharp video stream increase
will lead to network congestion, increasing the packet loss and affecting the surveillance
effect.
With the stream smoothing technology, Huawei SDCs can evenly send peak streams at frame
intervals based on user settings to prevent data packet loss due to burst streams, reducing
video streams' requirements on peak network bandwidth and ensuring video stream
transmission stability and reliability (as shown in Figure 2-19).
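The shaping idea can be sketched as follows: instead of sending a large I-frame as a single burst, spread it evenly across the frame interval. The packet size and numbers here are illustrative, not Huawei's actual shaper parameters:

```python
# Hedged sketch of stream smoothing: pace one frame's bytes evenly over
# its frame interval rather than bursting them all at once.
def smooth_schedule(frame_bytes: int, frame_interval_ms: float,
                    chunk_bytes: int = 1400):
    """Return (offset_ms, size) pairs pacing one frame over its interval."""
    chunks = [chunk_bytes] * (frame_bytes // chunk_bytes)
    if frame_bytes % chunk_bytes:
        chunks.append(frame_bytes % chunk_bytes)
    gap = frame_interval_ms / len(chunks)
    return [(round(i * gap, 2), size) for i, size in enumerate(chunks)]

# A 120 KB I-frame at 25 fps (40 ms interval) becomes 86 paced packets
# instead of one burst that could overflow a fixed-bandwidth link.
sched = smooth_schedule(120_000, 40.0)
print(len(sched), sched[0], sched[-1])
```

Pacing caps the instantaneous send rate near the average bit rate, so the peak bandwidth demand on the network approaches the stream's mean rather than its I-frame spikes.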
2.2.6 iPCA
Packet Conservation Algorithm for Internet (iPCA) is the first multiple-input-multiple-output
quality measurement technology in the industry, which solves the N² connection issue in
traditional point-to-point quality measurement technologies (BFD, NQA, and Y.1731). iPCA
technology uses the enhanced area-based packet conservation mechanism to monitor the
quality of a connectionless network and also provides accurate fault locating capabilities.
Packet conservation indicates that the number of packets leaving a system (network, link,
device, or board) equals the number of packets arriving at the system. If data flows passing
through a system comply with packet conservation, packet loss does not occur and the packet
transmission quality is ensured. Currently, iPCA mainly measures the packet loss rate, which
is the most important factor that affects service experience.
[Figure: packet conservation — internally generated packets enter the measured system
(network/link/device/board), and packets internally terminated by the system leave it.]
The iPCA quality measurement mechanism is simple. A measured system is in normal state if
the following condition is met: Number of packets arriving at the system + Number of
internally generated packets = Number of packets leaving the system + Number of packets
internally terminated by the system. If this condition is not met, some packets have been
dropped in the system.
The measurement of a device or link is iPCA device-level measurement. The measurement of
a network consisting of multiple devices is iPCA network-level measurement.
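The conservation rule quoted above can be expressed directly. The real mechanism colors and counts packets in network hardware; this sketch only checks the equation for a measured system:

```python
# Hedged sketch of the iPCA packet-conservation check. Counter values
# are illustrative; real deployments collect them from device hardware.
def conservation_ok(arrived: int, generated: int,
                    left: int, terminated: int) -> bool:
    """arrived + generated == left + terminated => no loss in the system."""
    return arrived + generated == left + terminated

def dropped(arrived: int, generated: int,
            left: int, terminated: int) -> int:
    """Packets unaccounted for, i.e. dropped inside the system."""
    return arrived + generated - left - terminated

assert conservation_ok(1000, 10, 990, 20)
assert dropped(1000, 10, 980, 20) == 10   # 10 packets lost in the system
```

Applying the same check per device, per link, and per surrounded network domain is what lets iPCA localize which segment of the path is dropping packets.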
[Figure: iPCA solution overview. Measurement object: iPCA monitors packet loss at the
device, link, or region level to measure network quality. Measurement scenarios:
device-level monitoring of a single agile device (excluding non-ENP and packet loss
impacts); link-level monitoring of a direct link between agile devices; network-level
monitoring of contiguous domains of non-agile devices (including third-party devices)
surrounded by agile devices, and of end-to-end links that transmit specified service flows.
The topology spans an HQ and branches.]
The iPCA algorithm is embedded in Huawei SDCs, so data packets can be monitored from
the source. If packets are lost anywhere along the transmission link, frame freezing or
artifacts will occur. eSight can quickly locate failure points to help resolve issues.
[Application Condition]:
iPCA must be used with agile network switches and eSight.
2.2.7 SmartIR
Common infrared cameras have a problem with the proportion between the high beam and
the low beam. If a camera works in telephoto (long-focus) mode, the radiation distance of
the low beam and the brightness of the high beam are insufficient, so the scenes that users
focus on cannot obtain proper infrared radiation brightness. If a camera works in wide-angle
(short-focus) mode, the brightness of the low beam is insufficient, while the high beam
results in over-brightness in the central area of the image. This is called the flashlight effect,
as shown in
Figure 2-22.
With the SmartIR technology, Huawei SDCs can enable automatic exposure and set an
optimal proportion between the high beam and the low beam based on the current focal length
(zoom), image brightness, and gain. This helps ensure proper image brightness and even
infrared radiation for cameras that work in the range from wide-angle mode to telephoto mode.
The accompanying figures show the image effects at different infrared radiation distances.
2.2.9 ROI
On the one hand, HD cameras increase the image clarity; on the other hand, they cause a
series of challenges to the network bandwidth and storage capacity for HD video surveillance
systems. To alleviate video data transmission and storage pressure and further promote HD
video surveillance system application in various sectors, Region of Interest (ROI) technology
is developed. Users are usually interested in a specified area on video images. The clarity of
the image in the key area can be higher than that in other areas. With the ROI technology,
Huawei SDCs can ensure effective surveillance over a specified area in the case of
insufficient network bandwidth.
Users can specify one or more areas on surveillance images as ROIs. The image quality in the
ROIs can be different (higher or lower) from that in other areas. That is, the system performs
near-lossless compression (high bit rate) on ROIs and lossy compression (low bit rate) on
non-ROI areas, ensuring higher quality for reconstructed images and achieving a higher compression
ratio.
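Encoders typically trade quality for bit rate through the quantization parameter (QP), where a lower QP means higher quality. As an illustrative sketch (the QP values and grid sizes are hypothetical, not the camera's), ROI encoding amounts to assigning ROI macroblocks a lower QP than the rest of the frame:

```python
# Hedged sketch of ROI encoding via a per-macroblock QP map.
# QP values are illustrative; lower QP = finer quantization = higher
# quality and higher bit rate for those blocks.
def qp_map(mb_cols, mb_rows, rois, base_qp=38, roi_qp=28):
    """rois: list of (col0, row0, col1, row1) rectangles in macroblock
    units; blocks inside any ROI get the lower (better-quality) QP."""
    grid = [[base_qp] * mb_cols for _ in range(mb_rows)]
    for c0, r0, c1, r1 in rois:
        for r in range(r0, r1):
            for c in range(c0, c1):
                grid[r][c] = roi_qp
    return grid

# 1080p is 120 x 68 macroblocks of 16 x 16; mark one ROI near the center.
grid = qp_map(120, 68, [(50, 30, 70, 40)])
assert grid[35][60] == 28 and grid[0][0] == 38
```

The encoder then spends most of the constrained bit budget on the blocks users care about, which is how effective surveillance of a specified area survives insufficient bandwidth.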
When a face keeps appearing, the camera will capture a face image with the best quality as
required and upload it as metadata. An image is considered of good quality if it is clear and
records the complete frontal face (with both eyes). Such an image is generally used by the
platform for face match or facial recognition.
In the camera web system, users can view detected faces framed in rectangles in real time and
the face match results.
1. If a user disables the defocus detection function and enables it again, the image produced
during the switching period is regarded as clear by default, based on which subsequent
detection is performed.
2. If the automatic focusing function is available, the image delivered after automatic
focusing is complete is regarded as clear by default, based on which subsequent
detection is performed.
3. If a user drags the focusing slider and the image after the slider dragging is clearer than
that after automatic focusing is complete, the image after the slider dragging is used as
the basis for subsequent detection.
Parameter: Description
Installation Info: Used to query the camera installation height.
Sensor Size: Used to query the camera sensor size, including the horizontal and vertical sizes.
Position: Used to set the camera longitude, latitude, azimuth, and tilt angle.
Lens Info: Used to query the camera lens information, including the zoom ratio, digital zoom, focal length, horizontal field of view, and vertical field of view.
Only IPC6681-Z20 can automatically obtain visible areas. Users need to manually mark visible areas for
other camera models.
Multi-camera collaboration improves the overall recall rate and effectively reduces the false
positives.
2. The DM pushes the new version of the intelligent algorithm to cameras. After receiving
the new algorithm version, the cameras disable the corresponding intelligent application
and start to update the algorithm. When the update is complete, the cameras restart the
corresponding intelligent application.
3. The administrator can also upload the new version of the intelligent algorithm through
the camera web client. In this case, the new version will be delivered to and applied on
the current camera only.
Intelligent Algorithm License Control Process
1. The administrator applies for licenses in batches using the ESN list on the license
platform.
2. The administrator imports the license files into the DM platform. The DM platform
parses the license files and associates them with the corresponding cameras based on the
ESNs.
3. The DM platform delivers the license files to the corresponding cameras.
4. The cameras save the license files to local storage and start the licensed intelligent
application.
5. Each time a camera restarts, it reads the license file from local storage and starts
the licensed intelligent application accordingly.
2.2.28 ITS
Intelligent Transportation Surveillance (ITS) cameras are used to detect various traffic
violations in urban roads or highways in real time. Common violations include red-light
running, end-number policy violation, speeding/low speed, not following lane markings,
wrong-way driving, unsafe lane change, parking in yellow zones, bus lane violation, motor
vehicles driving on non-motor vehicle lanes, and large vehicles driving on prohibited lanes.
ITS cameras can also collect traffic flow statistics, perform secondary vehicle feature
recognition, and take snapshots of in-vehicle violations (such as seat belt infractions and
hands-free device infractions).
1. Red-light running of motor vehicles
ITS cameras can detect red-light running in vehicle rear detection mode. This function is
primarily applied in urban intersections or other areas where traffic lights are deployed.
The implementation process is as follows:
1) When an ITS camera detects an object entering the surveillance area before the stop
line, the camera starts tracking the object immediately. If a red light is detected, the
camera takes a snapshot before the vehicle arrives at the stop line. If the camera
determines that a red-light running violation occurs, this snapshot will be the first
evidence image.
2) If the camera detects that the vehicle leaves the stop line at a red light, it takes the
second snapshot.
3) If the camera detects that the vehicle leaves the straight-through trigger line at a red
light, it takes the third snapshot.
In this way, a complete set of red-light running snapshots are taken. The camera will also
take a close-up snapshot, synthesize these snapshots into a violation image, and send the
image to the platform.
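The three-snapshot sequence above can be sketched as a small state machine. The event names and light states here are hypothetical labels; real detection runs on the video stream:

```python
# Hedged sketch of red-light-running evidence collection as a state
# machine; events are (event, light) tuples fed in tracking order.
def red_light_evidence(events):
    """Collect up to three evidence snapshots for one tracked vehicle."""
    snapshots = []
    for event, light in events:
        if light != "red":
            continue                              # only red-light events count
        if event == "before_stop_line" and len(snapshots) == 0:
            snapshots.append("approach")          # first evidence image
        elif event == "crossed_stop_line" and len(snapshots) == 1:
            snapshots.append("crossing")          # second snapshot
        elif event == "crossed_trigger_line" and len(snapshots) == 2:
            snapshots.append("through")           # third snapshot
    return snapshots if len(snapshots) == 3 else []  # violation needs all three

run = [("before_stop_line", "red"), ("crossed_stop_line", "red"),
       ("crossed_trigger_line", "red")]
assert red_light_evidence(run) == ["approach", "crossing", "through"]
assert red_light_evidence([("before_stop_line", "green")]) == []
```

Requiring all three ordered snapshots is what makes the synthesized violation image legally usable: it shows the vehicle before, at, and past the stop line under a red light.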
2. End-number policy violation
ITS cameras can detect vehicles on the road that violate the end-number policy.
The implementation process is as follows:
When a vehicle passes, the camera recognizes the license plate of the vehicle and checks
the end number policy. If the vehicle is banned on roads according to the end number
policy, the camera takes snapshots, synthesizes the snapshots into a violation image, and
sends the image to the platform.
3. Speeding/low speed detection
ITS cameras can detect vehicles driving faster or slower than the allowed speed range.
The implementation process is as follows:
When a vehicle passes, the camera detects the vehicle speed. If the vehicle drives faster
or slower than the allowed speed range, the camera takes snapshots,
synthesizes the snapshots into a violation image, and sends the image to the platform.
4. Motor vehicles not following lane markings
ITS cameras deployed in ePolice mode can detect vehicles not following lane markings.
The implementation process is as follows:
When a vehicle passes, the camera takes a snapshot of the vehicle. When detecting that
the movement path of the vehicle is against the lane markings, the camera records a
violation, takes snapshots, synthesizes the snapshots into a violation image, and sends
the image to the platform. Not following lane markings includes not following the
straight-through marking, left-turn marking, and right-turn marking.
5. Wrong-way driving
ITS cameras can detect the act of driving a motor vehicle against the direction of traffic.
The implementation process is as follows:
An ITS camera continuously detects the moving direction of motor vehicles. When the
moving direction of a motor vehicle is opposite to the regulated vehicle direction on the
lane, the camera takes two or three snapshots (two snapshots by default), synthesizes the
snapshots into a violation image, and sends the image to the platform.
6. Unsafe lane change
ITS cameras can detect motor vehicles that change lanes illegally.
The implementation process is as follows:
When a motor vehicle enters a surveillance lane, the camera takes a snapshot of the
vehicle and continuously monitors the movement path of the motor vehicle. If the
vehicle crosses the solid line and enters the adjacent lane, the camera takes the second
snapshot showing the lane change process. After the lane change, the camera takes the
third snapshot. The snapshots are synthesized into a violation image, which is then sent
to the platform.
7. Parking in yellow zones
ITS cameras can detect motor vehicles parked in yellow zones where the parking is
prohibited. The implementation process is as follows: The camera continuously monitors
the motor vehicles in its surveillance area. When a vehicle enters the yellow zone and
stays for a certain period (1–180s, configurable), the camera determines that a parking
violation occurs, takes two snapshots by default or three snapshots if specified, combines
the snapshots into a violation image and uploads the image to the platform.
8. Large vehicles driving on prohibited lanes
ITS cameras can detect large vehicles driving on lanes where large vehicles are
prohibited.
The implementation process is as follows:
The camera continuously monitors a lane where large vehicles are prohibited. When a
large vehicle enters the lane, the camera determines that a violation occurs, takes two or
three snapshots (two snapshots by default), synthesizes the snapshots into a violation
image, and sends the image to the platform.
9. Bus lane violation
ITS cameras can detect motor vehicles in bus lanes where non-bus vehicles are
prohibited. The implementation process is as follows: The camera continuously monitors
the motor vehicles in its surveillance area. When a non-bus vehicle enters the bus lane
and stays for a certain period (1–180s, configurable), the camera determines that a
violation occurs, takes two snapshots by default or three snapshots if specified, combines
the snapshots into a violation image and uploads the image to the platform.
10. Motor vehicles driving on non-motor vehicle lanes
ITS cameras can detect motor vehicles driving on non-motor vehicle lanes.
The implementation process is as follows:
The camera continuously monitors the non-motor vehicle lane. When a motor vehicle
enters the lane and stays for a certain period (0–180s, configurable), the camera
determines that a violation occurs, takes 1–3 snapshots (two snapshots by default),
synthesizes the snapshots into a violation image, and sends the image to the platform.
11. Secondary vehicle feature recognition
ITS cameras deployed in checkpoint mode can perform secondary vehicle feature
recognition and take snapshots of in-vehicle violations (such as seat belt infractions and
hands-free device infractions) based on the secondary vehicle feature recognition.
The implementation process is as follows:
When a vehicle passes, the ITS camera takes snapshots and performs secondary vehicle
feature recognition. If it detects that the driver or front passenger is not wearing the seat
belt or the driver uses handheld phones to make calls while driving, the camera
determines that a violation occurs, synthesizes the snapshots into a violation image, and
sends the image to the platform.
12. Emergency lane violation
ITS cameras can detect motor vehicles on emergency lanes.
The implementation process is as follows:
The camera continuously monitors lanes. When a motor vehicle enters the emergency
lane, the camera determines that a violation occurs, takes 1–3 snapshots (two snapshots
by default), synthesizes the snapshots into a violation image, and sends the image to the
platform.
13. Illegal U-turn
ITS cameras can detect motor vehicles that make U-turns illegally.
The implementation process is as follows:
The camera continuously monitors a motor vehicle lane. When a motor vehicle makes a
U-turn on the lane where U-turns are not allowed, the camera determines that a violation
occurs, takes three snapshots, synthesizes the snapshots into a violation image, and sends
the image to the platform.
14. Specified large vehicles that violate prohibitory traffic signs
ITS cameras can detect specified types of large vehicles driving on lanes where the large
vehicles are prohibited.
As shown in the preceding figure, the left image shows the live video image of a box or bullet
camera that is responsible for object detection and behavior analysis. When the detected
object triggers the behavior analysis rule, the coordinates of the box or bullet camera are
converted into the coordinates of the PTZ dome camera, and the PTZ dome camera is linked
to zoom in on the object. The right image shows the live video image of the PTZ dome
camera. After receiving instructions from the box or bullet camera, the PTZ dome camera
performs zoom and rotation. When the object moves, the box or bullet camera continues to
detect and track the object and deliver rotation instructions to the PTZ dome camera. In the
whole process, the box or bullet camera is responsible for object detection and tracking while
the PTZ dome camera is responsible for receiving instructions from the box or bullet camera
to perform zoom and rotation.
The procedure for implementing the smart tracking service is divided into the following steps:
Coordinate calibration for the box or bullet camera and PTZ dome camera:
Calibration calculates the mapping between the coordinates of the box or bullet camera and those of the PTZ dome camera. The resulting model takes the (x, y) coordinates of the box or bullet camera as input and outputs the PTZ coordinates of the PTZ dome camera. There are two types of calibration: automatic and manual.
The automatic calibration process is as follows: Adjust the rotation direction and zoom ratio
of the PTZ dome camera until the preview image of the PTZ dome camera coincides with that
of the box or bullet camera. Enable automatic calibration for the box or bullet camera. In this
case, the box or bullet camera will obtain the current frame of the PTZ dome camera and that
of the box or bullet camera, extract features from the two frames, and match the two images
by using the features. After the match, the box or bullet camera calculates the mapping model
between the box or bullet camera coordinates and the PTZ coordinates of the PTZ dome
camera based on the current PTZ information of the PTZ dome camera. The automatic
calibration is then complete.
The manual calibration process is as follows: Select a calibration point on the box or bullet
camera and adjust the PTZ device (zoom and rotation) of the PTZ dome camera to ensure that
the central point of the PTZ dome camera's preview image coincides with the calibration point
of the box (bullet) camera. Then, record the calibration point coordinates of the box or bullet
camera and the PTZ coordinates of the PTZ dome camera as a group of calibration points.
Repeat the preceding step to select 4–12 (six recommended) groups of calibration points.
Deliver the calibration points to the box or bullet camera for manual calibration on the
configuration page. In this case, the box or bullet camera calculates the mapping model
between the box (bullet) camera coordinates and the PTZ coordinates of the PTZ dome
camera based on the calibration points. The manual calibration is then complete.
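The manual calibration above can be sketched as a least-squares fit from box-camera pixel coordinates to dome-camera pan/tilt angles. This is an illustrative sketch assuming a simple affine model; the actual mapping model in Huawei SDCs is not documented here and is likely projective or nonlinear, and all point values below are invented for demonstration.

```python
import numpy as np

def fit_ptz_mapping(box_pts, ptz_pts):
    """Fit an affine model mapping box-camera pixels (x, y) to
    dome-camera (pan, tilt). Sketch only; real SDCs may use a
    richer model that also predicts zoom."""
    box = np.asarray(box_pts, dtype=float)           # shape (N, 2)
    ptz = np.asarray(ptz_pts, dtype=float)           # shape (N, 2)
    A = np.hstack([box, np.ones((len(box), 1))])     # [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, ptz, rcond=None)      # least-squares fit
    return M

def box_to_ptz(M, x, y):
    """Convert one box-camera point to dome-camera pan/tilt."""
    pan, tilt = np.array([x, y, 1.0]) @ M
    return pan, tilt

# Six groups of calibration points, as the text recommends (values invented)
box_pts = [(100, 80), (900, 80), (100, 700), (900, 700), (500, 400), (300, 600)]
ptz_pts = [(-20.0, -2.0), (20.0, -2.0), (-20.0, -17.5),
           (20.0, -17.5), (0.0, -10.0), (-10.0, -15.0)]
M = fit_ptz_mapping(box_pts, ptz_pts)
```

With the model fitted, any detected object's pixel position can be converted to pan/tilt instructions for the dome camera.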
Tracking policy configuration:
In ePolice or checkpoint scenarios, the camera can collect statistics on the traffic volume,
vehicle type, vehicle direction, average speed, queue length, time headway, space headway,
lane time occupancy, lane space occupancy, and traffic status by lane and period.
2.3 Security
2.3.1 Digital Watermark and Media Security Technologies
With wide application of codec formats that comply with international standards and
ever-developing audio and video processing technologies in the video surveillance industry,
video processing applications are relatively mature. However, re-coding and video data
tampering cases frequently occur. It is an indispensable capability of the video surveillance
solution to ensure data transmission security and data integrity. Focusing on video encoding
universality and considering industry appeals, Huawei embeds digital watermark and media
security technologies into cameras. This effectively prevents audio and video data from being
tampered with, ensures data integrity and authenticity, prevents network cracking, and
enhances media data security.
1. Digital watermark technology
During stream data output through video encoding, watermark information related to the stream frame (including the current number of frame bytes, the time, the device MAC address, and the SN) is embedded as protection information in a user-defined stream data packet. The data packet is stored on disks or transmitted with the compressed stream.
The watermark cannot be perceived, does not affect protected data usage, and will not degrade the image
quality.
At the media playing end, the stream data packet is decoded to obtain the watermark-related features, including the current number of frame bytes and the time, which are compared with the preset video encoding information. At the application layer, the system checks whether the decoded output information matches the protection information in the data packet to determine the stream's validity. The system verifies the watermark information to ensure data security and integrity, and generates an alarm if the verification fails.
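The generate-then-verify flow above can be illustrated with a keyed digest binding each frame to its metadata. This is a hedged sketch: the actual Huawei watermark format and key provisioning are proprietary, so the HMAC construction, key, and field names below are assumptions chosen only to show the verification principle.

```python
import hashlib
import hmac
import json

# Hypothetical per-device secret; real key provisioning is proprietary.
DEVICE_KEY = b"per-device-secret"

def make_watermark(frame: bytes, meta: dict) -> bytes:
    """Bind the frame bytes to its metadata (time, MAC, SN, byte count)."""
    payload = json.dumps({**meta, "frame_bytes": len(frame)},
                         sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload + frame, hashlib.sha256).digest()

def verify_watermark(frame: bytes, meta: dict, mark: bytes) -> bool:
    """Recompute the watermark at the playing end and compare;
    a mismatch means the stream was tampered with (raise an alarm)."""
    return hmac.compare_digest(make_watermark(frame, meta), mark)

meta = {"time": "2019-05-25T12:00:00Z",
        "mac": "00:11:22:33:44:55", "sn": "SN001"}
mark = make_watermark(b"frame-data", meta)
```

Any change to the frame bytes or the metadata makes verification fail, which is exactly the tampering signal that triggers the alarm described above.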
Media encryption:
(Figure: the encrypted media video stream is decrypted at the playing end using a private key.)
(Figure: timeline of the video buffering process, showing normal upload of data, the network outage detection point, the actual network outage point, and the network recovery point.)
If an exception occurs on the network between a camera and the IVS platform, the camera can
detect the exception and enable the video buffering function. Then, video data transmission is
changed from network transmission to local storage. The video data is stored in the SD card
of the camera. If the network outage lasts for a long period of time and the size of recorded
video exceeds the preset upper limit, data is cyclically overwritten in the SD card.
When the network recovers, the central platform initiates a recording download request, automatically searches for the missing recording time segments based on the predefined recording plan, and downloads the missing recordings from the camera, ensuring video data integrity and continuity. After the missing recordings are downloaded, the camera automatically deletes the buffered data.
To implement the video buffering function, cameras must support this function, have an SD card, and
cooperate with the central platform.
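The buffering behavior described above amounts to a size-capped ring buffer on the SD card plus a delete-after-download step. The sketch below models only that control logic (segment IDs and the in-memory storage are stand-ins for real SD card files):

```python
from collections import deque

class SdRingBuffer:
    """Sketch of SD-card video buffering with cyclic overwrite
    once the preset size limit is exceeded."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.used = 0
        self.segments = deque()          # (segment_id, data) in time order

    def store(self, seg_id: str, data: bytes) -> None:
        """Buffer a recording segment during a network outage."""
        self.segments.append((seg_id, data))
        self.used += len(data)
        while self.used > self.max_bytes:          # cyclic overwrite: drop oldest
            _, old = self.segments.popleft()
            self.used -= len(old)

    def download_and_delete(self, absent_ids) -> dict:
        """Platform pulls the absent segments after network recovery;
        the camera then deletes the buffered copies."""
        wanted = set(absent_ids)
        out = {i: d for i, d in self.segments if i in wanted}
        self.segments = deque((i, d) for i, d in self.segments if i not in out)
        self.used -= sum(len(d) for d in out.values())
        return out
```

For example, storing three 4-byte segments into a 10-byte buffer overwrites the oldest one, matching the "preset upper limit" behavior in the text.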
parties. Users can configure required authentication information on web clients. After 802.1X
access authentication is enabled, malicious attacks at the access layer can be prevented. This
provides access authentication security for the surveillance system and prevents unauthorized
users or devices from accessing and attacking the video surveillance system.
(Figure: 802.1X access authentication, showing a camera and a network with 802.1X access connected to the surveillance data center, while an unauthorized device and an unauthorized user are blocked.)
TCP congestion control takes effect only when the transmission protocol is RTP over TCP.
2.3.7 KMC
The key management CBB (KMC) is a Huawei-proprietary security management mechanism
that provides secure storage and lifecycle management capabilities for keys used in products
and services. The KMC solves many security issues of the product, such as using insecure
encryption algorithms, incorrectly using parameters of security algorithms, non-standard or
lack of key management, and lack of key management components that support multi-device
and multi-process.
The KMC supports the following functions:
Hierarchical key management for security isolation
Role-based key management
Key lifecycle management
On-demand key set expansion, and key isolation between different apps
Keys stored in files in active/standby mode, as well as key import and export
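The hierarchical, isolated key management listed above can be illustrated with keyed derivation: a root key never leaves the management layer, and each app/version pair gets its own working key. This is a conceptual sketch only; the KMC is proprietary, and the derivation scheme, key names, and root-key storage below are assumptions, not the real KMC API.

```python
import hashlib
import hmac

# Hypothetical root key; the real KMC stores this in protected
# active/standby key files, not in source code.
ROOT_KEY = b"root-key-material"

def derive_working_key(app_id: str, key_version: int) -> bytes:
    """Hierarchical derivation: working keys are derived from the root key,
    so different apps (key isolation) and different versions (lifecycle
    rotation) never share key material."""
    info = f"{app_id}:v{key_version}".encode()
    return hmac.new(ROOT_KEY, info, hashlib.sha256).digest()

# Isolation between apps, and rotation by version
k_media_v1 = derive_working_key("media-encrypt", 1)
k_log_v1 = derive_working_key("log-audit", 1)
k_media_v2 = derive_working_key("media-encrypt", 2)
```

Rotating a key is then just incrementing the version, and compromising one app's working key reveals nothing about another's.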
As a pioneer in the Safe City video surveillance solution, Huawei SDCs are equipped with
industry-leading encryption algorithms such as AES256, comprehensive log audit functions,
security isolation design, and hierarchical key permission management. This provides robust
security protection for customer data from four aspects: network architecture, algorithm
application, security management, and camera system.
2.3.9 GB 35114
GB 35114 is the Technical Requirements for Information Security of Video Surveillance
Network System for Public Security formulated by the Ministry of Public Security of the
People's Republic of China. It was released in November 2017 and became mandatory in November 2018. The GB 35114 protocol enhances the security capabilities based on the
GB/T 28181 protocol. The core requirements of this specification include device (client,
network camera, and third-party platform) two-factor authentication, video encryption, and
video signature, aiming to prevent unauthorized device access and protect video content
security (protection against content tampering and leak). In addition, this specification
proposes security requirements such as message authentication (message integrity), video
export, permission management, and log management.
Huawei network cameras support the GB 35114 protocol that provides the following
capabilities:
1. Two-factor authentication based on the digital certificate and management platform
2. Signaling signature, preventing signaling from being tampered with
3. User identity authentication, allowing only authorized users to log in to the system
2.4 Reliability
2.4.1 Three Systems and Three Configurations
Traditionally, if a camera's system configuration becomes invalid or is lost due to factors such as a power outage during a configuration change, engineers need to climb the pole and remove the camera for repair, which causes long service interruption and high recovery costs. Huawei SDC systems maintain three configuration files: two kept in 1:1 mirror backup and one read-only default configuration. When the configuration file being written is damaged, for example by a power outage, the other file of the mirror pair automatically recovers the data. In the rare case that both files are damaged, the system loads the default configuration file to keep running properly. Engineers no longer need to climb the pole to remove cameras for repair.
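The fallback chain above (mirror pair first, read-only default last) can be sketched as checksum-protected config files. This is an illustrative sketch; the file layout, checksum choice, and default values are assumptions, not Huawei's actual on-device format.

```python
import hashlib
import json

# Read-only factory default (illustrative values)
DEFAULT_CONFIG = {"resolution": "1080p"}

def save_config(path: str, cfg: dict) -> None:
    """Write the config with a leading SHA-256 digest for damage detection."""
    blob = json.dumps(cfg, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest().encode()
    with open(path, "wb") as f:
        f.write(digest + b"\n" + blob)

def load_one(path: str):
    """Return the config if the file is intact, else None."""
    try:
        digest, blob = open(path, "rb").read().split(b"\n", 1)
        if hashlib.sha256(blob).hexdigest().encode() == digest:
            return json.loads(blob)
    except (OSError, ValueError):
        pass
    return None

def load_config(primary: str, mirror: str) -> dict:
    """Try the 1:1 mirror pair; fall back to the read-only default
    only if both copies are damaged."""
    for path in (primary, mirror):
        cfg = load_one(path)
        if cfg is not None:
            return cfg
    return DEFAULT_CONFIG
```

If a power outage corrupts the file being written, the next boot silently loads the untouched mirror copy, which is the no-pole-climbing recovery the text describes.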
The first digit indicates the level of protection that the enclosure provides against access to
hazardous parts (for example, electrical conductors and moving parts) and the ingress of solid
foreign objects. A larger digit indicates a higher protection level.
The second digit indicates the level of protection that the enclosure provides against harmful
ingress of water. A larger digit indicates a higher protection level.
IK rating:     IK01    IK02   IK03    IK04   IK05   IK06  IK07  IK08  IK09  IK10
Impact energy: 0.14 J  0.2 J  0.35 J  0.5 J  0.7 J  1 J   2 J   5 J   10 J  20 J
2.5.2 OEC
Opto-electronic cascade (OEC) applies to the surveillance site with two box cameras or one
box camera linked with one PTZ dome camera. The following solutions are available in this
scenario:
Solution 1: two optical fibers, one PTZ dome camera with optical fibers, and one box camera
with optical fibers
(Figure: solution 1 networking at the gate, with two optical fibers.)
Solution 2: one optical fiber, one switch equipped with an optical module, one common PTZ
dome camera, and one common box camera
(Figure: solution 2 networking at the gate, with one optical fiber connected to a switch.)
Disadvantages of solution 2: high device costs and large space occupation.
Solution 3: OEC (one optical fiber, one camera with an OEC interface, and one common
camera)
(Figure: solution 3 networking at the gate, with one optical fiber and OEC cascading.)
applicability, as the interface definition meets user habits and allows smooth IPC access.
Huawei SDCs provide two SDK versions, Windows and Linux, for implementing
functions such as parameter setting, user login (including registration keep-alive), PTZ
controls, real-time traffic diversion (including audio and video streams), voice intercom
traffic diversion, and alarm reporting.
3. ONVIF protocol
Huawei SDCs support the ONVIF protocol (ONVIF 2.1, ONVIF 2.2, or ONVIF Profile
S) for interconnecting with network video products of different vendors, maximizing
customers' return on investment (ROI). Using this protocol, Huawei SDCs support a
variety of functions such as device management, device discovery, image configuration,
device input and output service, media configuration, PTZ controls, real-time streaming
media, and event processing.
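The ONVIF device discovery mentioned above is standardized as WS-Discovery: a client multicasts a SOAP Probe to 239.255.255.250:3702 and ONVIF devices answer with ProbeMatch responses. The sketch below builds such a Probe message; it is a generic WS-Discovery illustration, not Huawei-specific code, and the discover() helper is shown but not invoked.

```python
import socket
import uuid

WS_DISCOVERY_ADDR = ("239.255.255.250", 3702)  # standard WS-Discovery multicast

def build_ws_discovery_probe() -> bytes:
    """Build a WS-Discovery Probe for ONVIF NetworkVideoTransmitter devices."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>""".encode()

def discover(timeout: float = 2.0):
    """Send the probe and collect responder addresses (requires a live network)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_ws_discovery_probe(), WS_DISCOVERY_ADDR)
    found = []
    try:
        while True:
            _, addr = sock.recvfrom(65535)
            found.append(addr[0])
    except socket.timeout:
        pass
    return found
```

Each ProbeMatch reply carries the device's service address (XAddrs), which the client then uses for ONVIF device management and media calls.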
4. GB/T 28181 protocol
Huawei SDCs support the GB/T 28181 protocol. Users can connect such cameras to
Huawei's video surveillance platform or a third-party platform using the GB/T 28181
protocol to implement services such as live video viewing, PTZ controls, and alarm
reporting.
(Figure: device discovery result, showing software version SVN:20:12:07:02:20:57, model eSpace IPC 5811-WD-Z20, and vendor Huawei.)
1. Set the camera's IP address obtaining mode to DHCP. The camera then broadcasts a search for a DHCP server on the network. When the DHCP server receives the search message, it sends a response acknowledging the message and providing the server connection mode. The camera applies to the server for an IP address according to this connection mode, and the DHCP server automatically allocates an IP address to the camera over UDP when it receives the request. Along with this IP address, the camera obtains the IP address and port number of the surveillance platform.
2. The camera then sends registration information to the platform according to the
platform's IP address and port number, and obtains its own registration ID, which
uniquely identifies the camera. The automatic camera registration is then completed.
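The two steps above follow the standard DHCP DISCOVER/OFFER/REQUEST/ACK exchange, followed by registration with the platform. The sketch below models that message flow with stand-in classes; the class names, the way the platform address is delivered with the lease, and the registration API are all invented for illustration (real cameras speak DHCP at the packet level).

```python
class FakeDhcpServer:
    """Stand-in for a real DHCP server (hypothetical API)."""
    def __init__(self, pool, platform_addr):
        self.pool = list(pool)
        self.platform_addr = platform_addr

    def handle(self, msg):
        if msg == "DISCOVER":
            return {"type": "OFFER", "ip": self.pool[0]}
        if msg[0] == "REQUEST":
            # Platform IP/port handed out alongside the lease
            return {"type": "ACK", "ip": msg[1], "platform": self.platform_addr}

class Platform:
    """Stand-in for the surveillance platform's registration service."""
    def __init__(self):
        self.next_id = 1000
        self.devices = {}

    def register(self, ip):
        self.next_id += 1                 # unique registration ID per camera
        self.devices[self.next_id] = ip
        return self.next_id

class Camera:
    def obtain_address(self, server):
        offer = server.handle("DISCOVER")              # step 1: search broadcast
        ack = server.handle(("REQUEST", offer["ip"]))  # apply for the offered IP
        self.ip, self.platform = ack["ip"], ack["platform"]

    def register(self, platform):
        self.reg_id = platform.register(self.ip)       # step 2: obtain reg ID
```

Running the flow end to end leaves the camera with an IP address, the platform address, and a unique registration ID, completing the automatic registration described above.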
Category 5e network cables must be used. Otherwise, the transmission distance will be affected.
personnel can be sent to that camera in a timely manner for troubleshooting. This helps
avoid device damage and ensure surveillance quality.
2. Automatic temperature adjustment (PTZ dome camera)
Huawei PTZ dome cameras support automatic temperature adjustment. When the
temperature inside a camera is higher or lower than the temperature for secure running,
the camera automatically enables the fan or heater through the internal logic circuit for
temperature adjustment. If the temperature adjustment is proved to be ineffective, the
camera automatically generates an alarm and reports the alarm to the surveillance center,
requesting manual maintenance services.
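The adjust-then-alarm logic above can be sketched as a simple control loop: switch the fan or heater based on thresholds, and escalate to an alarm when readings stay out of range despite adjustment. The thresholds and window size below are illustrative, not Huawei's real limits.

```python
LOW_C, HIGH_C = -10.0, 55.0   # illustrative safe-running range

def adjust_temperature(temp_c: float) -> str:
    """Decide the internal logic circuit's action for one reading."""
    if temp_c > HIGH_C:
        return "fan_on"
    if temp_c < LOW_C:
        return "heater_on"
    return "idle"

def check_effectiveness(history, window: int = 3) -> str:
    """If the last `window` readings all stay out of range despite
    adjustment, report an alarm requesting manual maintenance."""
    recent = history[-window:]
    if len(recent) == window and all(not (LOW_C <= t <= HIGH_C) for t in recent):
        return "alarm"
    return "ok"
```

In operation, the camera would call adjust_temperature on each sensor reading and check_effectiveness over the recent history to decide when to notify the surveillance center.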
3. SD card fault detection
Huawei SDCs can detect SD card reading and writing faults and generate alarms. When
an SD card is inserted into the card slot, the camera enables SD card mounting detection.
When detecting an SD card reading and writing exception, the camera automatically
generates an alarm and sends the alarm to the surveillance center.
When the SD card being used for data writing and reading encounters exceptions, the
camera also generates an alarm, records the alarm to the alarm log, and reports the alarm
to the surveillance center.
(Figure: remote zoom and focus control, where a client sends zoom/focus control commands over IP through a switch to the camera's zoom and focus components.)
components to optical paths. The optical image stabilization technology is mostly used in the
consumer electronics field.
Huawei combines a G-sensor (electronic gyroscope and accelerometer) with a stabilization algorithm to deliver a more effective and accurate stabilization feature.
Working principle:
Electronic image stabilization assisted by the G-sensor
(Figure: G-sensor-assisted electronic image stabilization, where device jitter measured by the G-sensor and the original image data from the camera are combined to produce a stabilized image.)
Electronic image stabilization is a technology that relies entirely on algorithms to estimate shaking and compensate for it. A surveillance camera obtains an estimated motion vector by comparing the current frame with the previous frame, uses the motion vector as a mapping matrix, and calculates the mapping result of the current frame. In this way, the shake-corrected image is obtained.
The effect of electronic image stabilization depends on the accuracy of motion vector
estimation. If the motion vector is inaccurately estimated, no accurate correction value can be
obtained later. In a traditional solution, motion vectors are estimated based only on motion
changes of images. The accuracy and response time are not ideal. In addition, estimation
deviation is caused by local motion and motion blur.
Huawei uses the G-sensor (electronic gyroscope and accelerometer) to replace the motion
estimation part in the stabilization algorithm, which samples gyroscope data in real time to
obtain the current three-axis (X/Y/Z) space posture angle. The horizontal and vertical offsets
of the image relative to the initial state can be obtained based on the posture angle and focal
length. These offsets are used to implement image correction.
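The offset calculation above follows from basic projective geometry: a small camera rotation by angle θ shifts the image by roughly f·tan(θ) pixels, where f is the focal length in pixels. The sketch below shows that conversion and the compensating crop shift; it is a simplified model (no rolling-shutter or roll-axis handling) with invented parameter values.

```python
import math

def jitter_offset_px(pitch_rad: float, yaw_rad: float,
                     focal_len_px: float) -> tuple:
    """Convert G-sensor posture-angle changes into image offsets:
    yaw shifts the image horizontally, pitch vertically."""
    dx = focal_len_px * math.tan(yaw_rad)    # horizontal offset in pixels
    dy = focal_len_px * math.tan(pitch_rad)  # vertical offset in pixels
    return dx, dy

def stabilize_crop(x0: float, y0: float, dx: float, dy: float) -> tuple:
    """Shift the readout/crop window opposite to the jitter so the
    output image stays still."""
    return x0 - dx, y0 - dy
```

For example, a 0.001 rad yaw jitter at a 1000-pixel focal length moves the image about one pixel, so the crop window shifts one pixel the other way.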
positioning allow users to quickly obtain the image they want, improving surveillance
efficiency.
(Figure: 3D positioning, showing a selected box whose height is used for center positioning.)
3D positioning can be implemented on Huawei PTZ dome cameras only when Huawei IVS software is
used.
buffering. When only single-channel traffic diversion is required and video buffering is
not, the recommended bandwidth is at least 768 kbit/s.
2.5.16 Troubleshooting
When a fault occurs on the live network, fast troubleshooting is required. The troubleshooting cases, fault diagnosis, and fault information collection functions help users locate network faults quickly and accurately, improving onsite troubleshooting capabilities.
2.5.18 GPS
With the rapid development of intelligent applications and video big data, cameras nowadays
are no longer pure video collection devices. If users need to view, analyze, and perform
operations on traditional cameras in the same area or at specific sites, they need to identify
cameras at each site using On-Screen Display (OSD) and then group cameras manually.
However, with Huawei cameras, users can manually mark each camera's location information or locate the cameras using the built-in GPS module (with BDS integrated). The backend
platform can visualize the camera installation locations on a map. Based on this function,
more applications, such as the multi-camera collaboration, can be developed.
A camera with the built-in GPS module also supports time calibration, which enhances time
precision for recordings and images captured offline.
the posture changes, Huawei cameras are equipped with built-in G-sensors to monitor the
camera posture in real time. An alarm will be generated when the camera posture changes.
Working principle: When a camera is powered on, the system automatically obtains and
records the current posture of the camera and obtains the latest posture in real time. When
detecting that the posture change exceeds the preset threshold and the change persists, the
system generates an alarm.
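The working principle above (record the posture at power-on, then alarm only when a change exceeds the threshold and persists) can be sketched as a small monitor class. The threshold and persistence values are illustrative, not Huawei's real settings.

```python
class PostureMonitor:
    """Sketch of G-sensor posture-change alarming: the change must both
    exceed the threshold and persist across several samples."""

    def __init__(self, threshold_deg: float = 5.0, persist_samples: int = 3):
        self.ref = None                    # posture recorded at power-on
        self.threshold = threshold_deg
        self.persist = persist_samples
        self.count = 0                     # consecutive over-threshold samples

    def sample(self, pitch: float, roll: float, yaw: float) -> str:
        if self.ref is None:               # first sample after power-on
            self.ref = (pitch, roll, yaw)
            return "ok"
        delta = max(abs(a - b)
                    for a, b in zip((pitch, roll, yaw), self.ref))
        self.count = self.count + 1 if delta > self.threshold else 0
        return "alarm" if self.count >= self.persist else "ok"
```

A momentary bump resets the persistence counter, so only a sustained posture change (e.g., the camera being knocked askew) raises the alarm.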
CU Client Unit
CPE Customer Premises Equipment
PC Personal Computer
PU Peripheral Unit