Realization and Application of Motion Detection Technology in Digital Monitoring
With the rapid development of the economy, science, and technology, requirements for technical security prevention keep rising. From the late 1980s to the mid-1990s, with the introduction of various new security concepts, many social departments, industries, and residential communities established their own closed-circuit television monitoring systems.
However, traditional video surveillance was limited by the technology of the time: most systems could only perform analog television monitoring of a site, with video information stored on tape. When there are many monitored sites and footage must be kept for a long time, the number of tapes becomes enormous, query and retrieval become very cumbersome, management and operating costs rise, and picture quality degrades as tapes age or are copied repeatedly. With the development of codec technology, especially the maturing of MPEG-4/H.264, more and more users have adopted digital video surveillance systems that compress multiple video channels in real time and store the video information in digital form on hard disk.
Because a computer screen is limited in size, when previewing multiple video channels simultaneously each preview window is relatively small, which makes it hard for staff to spot hidden dangers in time. Moreover, as intelligent techniques find ever wider application in digital security, some monitored sites have higher security requirements and need moving objects to be detected and tracked promptly, so accurate image detection technology is needed to provide automatic alarms and target detection. Motion detection was the earliest application of intelligence in the security field, and its technical development and application prospects have attracted wide attention.
Motion detection refers to recognizing image changes in a specified area and detecting the presence of moving objects while avoiding interference caused by changes in lighting. How to extract the changed region from the background in a real-time image sequence must be considered, and effective segmentation of the motion region is very important for target classification, tracking, and other post-processing, because subsequent processing considers only the pixels of the motion region. However, because the background image changes dynamically, owing to weather, light, shadows, and clutter, motion detection is a very difficult task.
2 Motion Detection Principle
Early motion detection, for example with MPEG-1, compared and analyzed the I-frames produced after encoding. Detecting image changes by comparing video frames is a feasible approach. The principle is as follows:
The MPEG-1 video stream consists of three types of coded frames: key frames (I-frames), predicted frames (P-frames), and bidirectionally predicted frames (B-frames). I-frames are encoded according to the JPEG standard, are independent of other coded frames, are the only directly accessible frames in an MPEG-1 stream, and typically occur once every 12 frames. Consecutive I-frames are extracted, decoded, and stored frame by frame in a memory buffer; two consecutive frames in the buffer are then converted to bitmap format and stored in another memory area for comparison, for which many methods exist. This approach works on the encoded data, but current MPEG-1/MPEG-4 encoding is lossy compression, so compared with operating on the original images it inevitably produces false alarms and inaccuracies.
Several commonly used methods
1) Background Subtraction
Background subtraction is the most commonly used method in current motion detection. It detects the motion region from the difference between the current image and a background image. It generally provides the most complete feature data, but it is particularly sensitive to changes in dynamic scenes, such as illumination changes and interference from irrelevant events. The simplest background model is a time-averaged image; most researchers are currently developing different background models in order to reduce the impact of dynamic scene changes on motion segmentation.
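As a minimal illustration of the idea (not the specific method used in this article), the sketch below maintains the simplest background model mentioned above, a running time average, and thresholds the per-pixel difference. The 2-D-list frame format, learning rate, and threshold values are assumptions for illustration.

```python
# Background subtraction sketch. Frames are 2-D lists of 8-bit
# luminance values; ALPHA and THRESH are assumed example values.

ALPHA = 0.05   # background learning rate (assumed)
THRESH = 25    # per-pixel difference threshold (assumed)

def subtract_background(frame, background):
    """Return a binary motion mask and the updated background model."""
    mask = [[0] * len(row) for row in frame]
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if abs(pixel - background[y][x]) > THRESH:
                mask[y][x] = 1
            # blend the current frame into the background (running average)
            background[y][x] = (1 - ALPHA) * background[y][x] + ALPHA * pixel
    return mask, background
```

Because every frame is blended into the background, a slow illumination drift is gradually absorbed into the model, while an abrupt change (the dynamic-scene sensitivity noted above) still triggers the mask.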
2) Temporal Difference
The temporal difference (also called adjacent-frame difference) method takes pixel-wise differences between two or three adjacent frames of a continuous image sequence and thresholds them to extract the motion region. Temporal differencing adapts well to dynamic environments, but it generally cannot extract all the relevant feature pixels completely, and it easily leaves holes inside the moving entity.
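A minimal two-frame temporal-difference sketch, under an assumed 2-D-list frame format and an assumed threshold:

```python
# Two-frame temporal difference sketch. The frame format (2-D lists
# of luminance values) and the threshold are illustrative assumptions.

THRESH = 25  # assumed per-pixel difference threshold

def temporal_difference(prev, curr):
    """Mark pixels whose luminance changed by more than THRESH."""
    return [[1 if abs(c - p) > THRESH else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

For example, a uniform bright bar shifted one pixel (prev = [[100, 100, 100, 0]], curr = [[0, 100, 100, 100]]) yields the mask [[1, 0, 0, 1]]: the interior of the object produces no difference, which is exactly the hole phenomenon noted above.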
3) Optical Flow
Optical-flow-based motion detection uses the optical flow characteristics of the moving target over time. For example, Meyer [2] initializes a contour-based tracking algorithm by computing the displacement-vector optical flow field, thereby extracting and tracking moving targets effectively. The advantage of this method is that it can detect independently moving targets even when the camera is moving. However, most optical flow computation methods are quite complex and have poor noise immunity; without special hardware they cannot be applied to real-time processing of full-frame video streams. For a more detailed discussion of optical flow, see the article by Barron et al. [3].
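To show why optical flow is computationally heavier than simple differencing, here is a minimal single-window Lucas-Kanade sketch (a standard optical flow technique, used here purely as an illustration, not the method of Meyer's paper). It estimates one global flow vector from spatial and temporal gradients via a 2x2 least-squares solve; real implementations work per window or per pixel and are far more elaborate.

```python
# Minimal Lucas-Kanade sketch: one global (u, v) flow vector for the
# whole image, from the normal equations of Ix*u + Iy*v + It = 0.

def lucas_kanade(img1, img2):
    """Estimate a single (u, v) flow vector between two frames."""
    h, w = len(img1), len(img1[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (img1[y][x + 1] - img1[y][x - 1]) / 2.0  # spatial gradient x
            iy = (img1[y + 1][x] - img1[y - 1][x]) / 2.0  # spatial gradient y
            it = img2[y][x] - img1[y][x]                  # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:
        return 0.0, 0.0  # aperture problem: gradients do not constrain flow
    # Solve [sxx sxy; sxy syy] [u v]^T = -[sxt syt]^T by Cramer's rule
    u = (-sxt * syy + syt * sxy) / det
    v = (-syt * sxx + sxt * sxy) / det
    return u, v
```

Even this toy version needs two gradient passes and a linear solve per window, which is why the text notes that full-frame real-time optical flow typically requires special hardware.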
Of course, there are other motion detection methods. The motion vector detection method is suitable for multi-dimensionally changing environments; it can eliminate vibrating pixels in the background and make objects moving in a certain direction stand out more prominently, but it also cannot segment objects accurately.
3 Implementation of motion detection
Hikvision, a domestic manufacturer of video and audio codec cards, relies on the strong R&D capability of the 52nd Research Institute of China Electronics Technology Group Corporation to perform MPEG-4/H.264 real-time encoding on a DSP (Digital Signal Processor). Its SDK interface provides effective motion detection and analysis capabilities. The process is as follows:
★ Signal input processing module: in a standard analog video signal (CVBS, color or black and white), the luminance and chrominance signals are superimposed on a common carrier. The signal must be decoded by an A/D chip (e.g., the Philips SAA7113), which converts the analog signal into a digital signal in the standard ITU-R BT.656 YUV format and sends it, frame by frame, to the DSP and memory on the encoding card.
★ ICP (Image Coprocessor) processing module: the YUV data in the DSP has OSD (superimposed characters and time) and a LOGO (bitmap) added; the composited data is sent over the PCI bus to host memory for real-time video preview and is also sent to the encoding card's memory for encoding.
★ ENCODER (encoding) module: the YUV data in the encoding card's memory is sent to the MPEG-4/H.264 encoder, which generates a compressed bitstream and sends it to host memory for recording or network transmission.
★ MOTION DETECT processing module: processes the YUV data, frame by frame, in the encoding card's memory.
Currently we use a frame difference algorithm that combines background and temporal differencing. Scene changes are obtained by computing the pixel difference between two frames separated by a certain time interval. The main steps are as follows:
1) Set the parameters such as the motion detection area:
Through functions in the SDK, the user can set 1-99 valid rectangles and can also choose between fast and slow detection. Fast detection performs the differential operation on two frames that are two frames apart; slow detection performs it on two frames that are more than 12 frames apart.
2) Start motion detection function:
Because the data after A/D conversion is in the standard ITU-R BT.656 YUV 4:2:2 format, and the human eye is most sensitive to luminance, only the luminance (Y) values are processed in order to simplify the algorithm and improve efficiency. For each pixel (x, y) in a detection area, the luminance difference between frames at times T and Tn is

M(x,y) = | Y(x,y)(T) − Y(x,y)(Tn) |

If M(x,y) ≥ Ta, then L = 1 (the pixel is marked as changed); the regional differential coefficient is IMsum = Σ L.
Whether to raise an alarm is determined by the IMsum value over the entire detection area:

Alarm = TRUE, if Σ IMsum ≥ Tb
Alarm = FALSE, otherwise

where Ta and Tb are appropriately chosen thresholds.
In CIF format the full-screen resolution is 352×288 (PAL). The entire detection area is divided into 16×16-pixel macroblocks. Within each macroblock, pixels are differenced point by point from left to right and top to bottom to obtain the macroblock differential coefficient; the detection area is then scanned macroblock by macroblock, again from left to right and top to bottom, to compute the differential coefficient of the whole area.
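The macroblock scan described above can be sketched as follows. The threshold values Ta and Tb are illustrative assumptions; the Y planes are 2-D lists of luminance values.

```python
# Macroblock-based frame difference sketch: the detection area is
# covered by 16x16 macroblocks; inside each block, pixels are
# differenced point by point and thresholded (TA), block coefficients
# are collected, and their sum is compared against TB for the alarm.

MB = 16   # macroblock size, per the text
TA = 25   # per-pixel luminance threshold (assumed value)
TB = 100  # area-level alarm threshold (assumed value)

def macroblock_coefficients(prev_y, curr_y):
    """Return per-macroblock differential coefficients for two Y planes."""
    h, w = len(prev_y), len(prev_y[0])
    coeffs = []
    for by in range(0, h, MB):                 # top to bottom
        row = []
        for bx in range(0, w, MB):             # left to right
            c = sum(1
                    for y in range(by, min(by + MB, h))
                    for x in range(bx, min(bx + MB, w))
                    if abs(curr_y[y][x] - prev_y[y][x]) > TA)
            row.append(c)
        coeffs.append(row)
    return coeffs

def alarm(coeffs):
    """Alarm when the whole-area coefficient reaches the threshold Tb."""
    return sum(sum(row) for row in coeffs) >= TB
```

Returning the per-macroblock coefficients, rather than only the alarm flag, matches the SDK behavior described below of reporting macroblock coefficients in real time.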
3) Return motion detection result
If the differential coefficient of the entire area is greater than the set threshold, the alarm state is set and the macroblock differential coefficients of each detection area are returned in real time. According to the fast or slow detection state set in advance, the picture is analyzed continuously and results are returned until motion detection is stopped.
If the differential coefficient of the entire area is less than the set threshold, the alarm state is reset.
This motion detection based on the frame difference algorithm is completely independent of encoding and can be started and stopped flexibly, achieving "record when there is motion, do not record when there is none." Together with other interface functions, a pre-recording function can also be realized: under normal conditions only preview and motion detection run, and the encoded data is not written to file but held temporarily in a FIFO buffer. Once a motion detection alarm occurs, the buffered pre-alarm data is written to the file first, and the encoded data is then written to the file in real time. After the alarm is cleared, writing to the file continues for a period of time before the system returns to the buffering state, so the whole course of a motion alarm is captured on video. In this way the entire alarm event can be acquired completely while saving system resources, and in the same storage space the retention time of the video can be greatly extended.
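The pre-recording scheme can be sketched with a bounded FIFO. The buffer size and frame representation here are simplified assumptions, and the post-alarm stop timer mentioned above is omitted for brevity.

```python
# Pre-record sketch for "record only on motion": encoded frames go
# into a bounded FIFO; on alarm, the buffered pre-alarm frames are
# flushed to the file first, then live frames are appended.

from collections import deque

class PreRecorder:
    def __init__(self, prebuffer_frames=50):
        self.fifo = deque(maxlen=prebuffer_frames)  # pre-alarm buffer
        self.recording = False
        self.file = []  # stands in for the video file

    def on_frame(self, encoded_frame, motion_alarm):
        if motion_alarm and not self.recording:
            self.recording = True
            self.file.extend(self.fifo)      # flush pre-alarm frames first
            self.fifo.clear()
        if self.recording:
            self.file.append(encoded_frame)  # real-time write during alarm
        else:
            self.fifo.append(encoded_frame)  # keep only the newest frames
```

The deque with maxlen automatically discards the oldest frames, so memory stays bounded no matter how long the quiet period before an alarm lasts.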
4 Evaluation of Motion Detection Technology
Evaluating the performance of motion detection technology is not easy. Quantitative analysis in particular requires a standard video sequence for comparison and study, which should include effects such as sudden scene changes, camera movement, and light-and-shade transitions. A detection scheme can be evaluated with various parameters, such as the detection success rate and the detection failure rate. In a practical application environment, better monitoring results can be obtained by adjusting the threshold separately for ordinary indoor and for outdoor environments.
Schemes can also be classified by how the function is implemented, mainly as a qualitative comparison of hardware and software implementations.
A hardware implementation of the monitoring function does not occupy the CPU and processes faster, so more complicated algorithms can be used to obtain more accurate monitoring results with good real-time performance. For example, some cameras have a built-in VMD (Video Motion Detector) circuit that can serve as an alarm probe: the detection circuit first stores a static image, and if the amount of change in the picture exceeds a preset value, the system sends an alarm signal to alert security personnel or start the video recorder. However, a hardware implementation also means higher cost, and once the system places newer, higher demands on the dynamic monitoring function, the original hardware can only be discarded and new hardware purchased, causing waste.
With a software implementation, if the host CPU is used for the numerical computation, the algorithm cannot be too complicated and the computational load cannot be too large, or other functions of the monitoring system (such as display and recording) will be affected. Downloading the algorithm to a DSP solves this problem: function expansion is easy, optimizing the algorithm causes no unnecessary waste because new microcode can simply be generated and downloaded to the DSP, performance can be improved, and personalized feature combinations can be offered according to different user needs.
We believe that implementing the system's dynamic monitoring function with a DSP plus software is the more far-sighted approach, and it is also the inevitable path for the development of motion detection technology. In fact, we have implemented MPEG-4/H.264 CIF/2CIF encoding with motion detection on Philips' TriMedia TM1300 chip, and MPEG-4/H.264 4CIF/2CIF/CIF encoding with motion detection on TI's DM642 chip.
References:
1. Wang Liang et al. A Survey of Visual Analysis of Human Motion.
2. Meyer D, Denzler J and Niemann H. Model based extraction of articulated objects in image sequences for gait analysis. In: Proc IEEE International Conference on Image Processing, Santa Barbara, California, 1997, 78-81.
3. Barron J, Fleet D and Beauchemin S. Performance of optical flow techniques. International Journal of Computer Vision, 1994, 12(1): 43-77.