US20040100563A1 - Video tracking system and method - Google Patents
- Publication number
- US20040100563A1 (application US10/306,509)
- Authority
- US
- United States
- Prior art keywords
- camera
- adjustment
- target object
- rate
- processor
- Prior art date
- Legal status
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
Definitions
- the present invention relates to a video camera system for tracking a moving object.
- Movable cameras which may pan, tilt and/or zoom may also be used to track objects.
- the use of a PTZ (pan, tilt, zoom) camera system will typically reduce the number of cameras required for a given surveillance site and also thereby reduce the number and cost of the video feeds and system integration hardware such as multiplexers and switchers associated therewith.
- each discrete camera movement occurs at the fastest camera movement speeds available wherein each of the panning movements will be conducted at a common pan rate, each of the tilting movements will be conducted at a common tilt rate and each of the zooming movements, i.e., adjusting the focal length of the camera, will be conducted at a common zoom rate.
- the resulting series of discrete camera movements typically leads to a video image which is “jumpy” in comparison to a video image produced by the manual tracking of a target object by a skilled human operating a joystick or other camera control.
- the present invention provides an automated video tracking system having a movable camera wherein the automatic adjustment of the camera when tracking a target object may be done continuously and at various speeds to provide a video image with relatively smooth transitional movements during the tracking of the target object.
- the invention comprises, in one form thereof, a video tracking system which includes a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. Also included is at least one processor which is operably coupled to the camera. The processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and adjust the camera to track the target object wherein the processor adjusts the camera at a plurality of varied adjustment rates.
- the invention comprises, in another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. Also included in the system is at least one processor which is operably coupled to the camera. The processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and estimate a target value wherein the target value is a function of a property of the target object. The property may be the velocity of the target object. The processor adjusts the camera at a selected adjustment rate which is a function of the target value.
- such systems may include a processor which selects the adjustment rate of the camera as a function of at least one property of the target object.
- the at least one property of the target object may include the velocity of the target object.
- the camera may be selectively adjustable at a variable rate in adjusting at least one of a panning orientation of the camera and a tilt orientation of the camera.
- the processor may also be programmed to select the adjustment rate of the camera based upon analysis of a first image and a second image wherein the first image is acquired by the camera adjusted to define a first field of view and the second image is acquired by the camera adjusted to define a second field of view.
- the first and second fields of view may be partially overlapping and the determination of the selected adjustment rate by the processor may include identifying and aligning at least one common feature represented in each of the first and second images.
- the camera may also define a third field of view as the camera is being adjusted at the selected adjustment rate with a third image being acquired by the camera when it defines the third field of view and wherein the first, second and third images are consecutively analyzed by the processor.
- the camera may have a selectively adjustable focal length and the processor may select the focal length of the camera as a function of the distance of the target object from the camera.
- the adjustment of the camera may include selective panning movement of the camera wherein the panning movement defines an x-axis, selective tilting movement of the camera wherein the tilting movement defines a y-axis, and selective focal length adjustment of the camera wherein adjustment of the focal length defines a z-axis with the x, y and z axes being oriented mutually perpendicular.
- the processor may adjust the camera at a selected panning rate which is a function of the velocity of said target object along the x-axis and at a selected tilting rate which is a function of the velocity of the target object along the y-axis.
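- the selection of pan and tilt rates as a function of the target object's velocity components may be sketched as follows. This is a minimal illustration, not the patent's implementation; the gain and the maximum rate cap are assumed values.

```python
# Minimal sketch: map estimated target velocity components to pan and
# tilt adjustment rates. `gain` and `max_rate` are assumed illustrative
# values, not taken from the patent.

def select_adjustment_rates(vx, vy, gain=1.0, max_rate=100.0):
    """Return (pan_rate, tilt_rate) as a function of target velocity.

    vx, vy -- target velocity along the x (pan) and y (tilt) axes;
    the sign of each returned rate encodes the direction of adjustment.
    """
    def clamp(r):
        return max(-max_rate, min(max_rate, r))
    return clamp(gain * vx), clamp(gain * vy)

print(select_adjustment_rates(30.0, -12.5))  # (30.0, -12.5)
print(select_adjustment_rates(500.0, 0.0))   # capped at (100.0, 0.0)
```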
- the camera may also be adjusted at a first selected adjustment rate until the processor selects a second adjustment rate and communicates the second adjustment rate to the camera.
- the tracking system may also include a display device and an input device operably coupled to said system wherein an operator may view the video images on the display device and input commands or data into the system through the input device.
- the display device and input device may be positioned remotely from said camera.
- the invention comprises, in yet another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera.
- the system also includes at least one processor operably coupled to the camera.
- the processor receives video images acquired by the camera and selectively adjusts the camera.
- the processor is programmed to detect a moving target object in the video images and adjust the camera and track the target object.
- the processor communicates a plurality of commands to the camera and the camera is continuously and variably adjustable in accordance with the commands without intervening stationary intervals.
- the camera of such a system may be selectively adjustable at a variable rate in adjusting at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera.
- the camera may acquire images for analysis by the processor while being adjusted and the continuous and variable adjustment of the camera includes varying either a direction of adjustment or a rate of adjustment.
- the commands may involve a first command which adjusts the camera at a selected rate and direction until a second command is received by the camera.
- the invention comprises, in still another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera.
- the system also includes at least one processor operably coupled to the camera wherein the processor receives video images acquired by the camera and selectively adjusts the camera.
- the processor is programmed to detect a moving target object in the video images and adjust the camera and track the target object.
- the processor can consecutively analyze first, second and third images acquired by the camera wherein each of the images records a different field of view.
- the processor communicates to the camera a first command selectively adjusting the camera and a second command selectively adjusting the camera.
- the camera is adjusted in accordance with the first command during at least a portion of a first time interval between acquisition of the first and second images.
- the camera is adjusted in accordance with the second command during at least a portion of a second time interval between acquisition of the second and third images.
- the camera is continuously adjusted between acquisition of the first image and the third image.
- the invention comprises, in another form thereof, a method of tracking a target object with a video camera.
- the method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera.
- the method also includes adjusting the camera at a selectively variable adjustment rate to track a target object.
- the adjustment rate may be selected as a function of at least one property of the target object.
- the invention comprises, in yet another form thereof a method of tracking a target object with a video camera.
- the method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera.
- the method also includes detecting a target object in images acquired by said camera, estimating a target value which is a function of at least one property of the target object and adjusting the camera at a selectively variable rate wherein the adjustment rate of the camera is selected as a function of the target value.
- the at least one property of the target object may include the velocity of the target object.
- the adjustment rate may be selected based upon analysis of a first image and a second image wherein the first image is acquired by the camera when adjusted to define a first field of view and the second image is acquired by the camera when adjusted to define a second field of view.
- the first and second fields of view may be partially overlapping and the determination of the adjustment rate may include identifying and aligning at least one common feature represented in each of the first and second images.
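- the alignment of two partially overlapping images by a common feature can be illustrated with a simple sum-of-absolute-differences (SAD) patch search. This is a sketch under assumptions: the feature location, patch size and search radius are illustrative, and the patent does not specify SAD matching as the alignment method.

```python
# Hedged sketch of aligning two partially overlapping frames by locating
# one common feature: a small patch from the first image is searched for
# in the second image, and the best match gives the translation between
# the two fields of view.

def sad(img, px, py, patch):
    """Sum of absolute differences between `patch` and img at (px, py)."""
    total = 0
    for j, row in enumerate(patch):
        for i, v in enumerate(row):
            total += abs(img[py + j][px + i] - v)
    return total

def align_by_feature(img1, img2, fx, fy, size=2, radius=2):
    """Estimate (dx, dy) shifting the feature at (fx, fy) in img1 to img2."""
    patch = [row[fx:fx + size] for row in img1[fy:fy + size]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            px, py = fx + dx, fy + dy
            if 0 <= px and 0 <= py and py + size <= len(img2) and px + size <= len(img2[0]):
                score = sad(img2, px, py, patch)
                if best is None or score < best[0]:
                    best = (score, dx, dy)
    return best[1], best[2]

# A bright 2x2 feature at (2, 2) in img1 appears at (3, 2) in img2.
img1 = [[0] * 6 for _ in range(6)]
img2 = [[0] * 6 for _ in range(6)]
for j in (2, 3):
    for i in (2, 3):
        img1[j][i] = 200
        img2[j][i + 1] = 200
print(align_by_feature(img1, img2, 2, 2))  # (1, 0): shifted one pixel right
```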
- the adjusting of the camera at a selectively variable adjustment rate may include adjusting at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera and the selected variable adjustment rates may be selected as a function of the velocity of the target object.
- the determination of the adjustment rates may also involve the use of a proportionality factor which is a function of the real world distance of the target object from the camera.
- the adjustment of the camera may also include adjusting the camera at a first selected adjustment rate until a second selected adjustment rate is communicated to the camera.
- the invention comprises, in another form thereof, a method of tracking a target object with a video camera.
- the method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera.
- the method also includes adjusting the camera to track a target object wherein the adjustment of the camera includes selectively and variably adjusting at least one adjustment parameter and wherein the camera is continuously adjustable during the selective and variable adjustment of the at least one adjustment parameter.
- the selective and variable adjustment of at least one adjustment parameter of the camera may include the adjustment of at least one, or each, of a panning orientation of said camera and a tilt orientation of said camera.
- the adjustment of such parameters may be selective and variable.
- the selective and variable adjustment of such parameters may include the varying of either the direction of adjustment or the rate of adjustment and the rate of adjustment may be selected as a function of the velocity of the target object.
- the invention comprises, in another form thereof, a method of tracking a target object with a video camera.
- the method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera.
- the method also includes detecting a target object in images acquired by the camera and acquiring first, second and third images wherein each of the first, second and third images record a different field of view.
- the method also includes communicating a first command to the camera selectively adjusting the camera and communicating a second command to the camera selectively adjusting the camera.
- the method also includes the step of continuously adjusting the camera between acquisition of the first image and acquisition of the third image, wherein the camera is adjusted in accordance with the first command during at least a portion of a first time interval between acquisition of the first image and acquisition of the second image, and the camera is adjusted in accordance with the second command during at least a portion of a second time interval between acquisition of the second image and acquisition of the third image.
- the first and second commands may selectively adjust at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera.
- the adjustment of such parameters may be at a selectively variable adjustment rate and the rates may be selected as a function of the velocity of the target object.
- the invention comprises, in yet another form thereof, a video tracking system having a video camera with a selectively adjustable focal length. Also included is at least one processor operably coupled to said camera wherein the processor receives video images acquired by the camera and selectively adjusts the focal length of the camera.
- the processor is programmed to detect a moving target object in the video images and adjust the focal length of the camera as a function of the distance of the target object from the camera.
- the camera of the system may also have a selectively adjustable panning orientation and a selectively adjustable tilting orientation wherein the processor adjusts the panning orientation and the tilting orientation to maintain the target object centered in the video images and selectively adjusts the focal length of the camera as a function of the tilt angle.
- the invention comprises, in still another form thereof, a method of automatically tracking a target object with a video camera.
- the method includes providing a video camera having a selectively adjustable focal length and adjusting the focal length of the camera as a function of the distance of the target object from the camera.
- the camera used with such a method may also have a selectively adjustable panning orientation and a selectively adjustable tilting orientation wherein tracking the object involves adjusting the panning and tilting orientation of the camera and selectively adjusting the focal length of the camera as a function of the tilt angle of the camera.
- An advantage of the present invention is that it provides video images which reflect relatively fluid transitional camera movements during the tracking of the target object and which do not “jump” from point to point when tracking the target object.
- the resulting video is typically regarded as more pleasant to view and less distracting to human operators who are viewing the video to observe the behavior of the target object.
- Another advantage of the present invention is that it allows for images acquired for automatic tracking purposes to be obtained while the camera is in motion and thus does not require the camera to rest in a stationary position for image acquisition during the tracking of a target object.
- Yet another advantage of the present invention is that it allows the system to continue tracking a target object while a human operator manually repositions the camera because the tracking system may utilize a series of images which do not have a common field of view to track the target object.
- Still another advantage of the present invention is that it may be used with conventional pan, tilt, zoom (PTZ) cameras and, thus, facilitates the retrofitting and upgrading of existing installations having such conventional PTZ cameras.
- PTZ pan, tilt, zoom
- FIG. 1 is a schematic view of a video surveillance system in accordance with the present invention.
- FIG. 2 is a schematic view of the automated tracking unit.
- FIG. 3 is a flowchart representing the operation of the video surveillance system.
- FIG. 4 is a flow chart representing the different status levels of the tracking unit.
- FIG. 5 is a flow chart representing the reacquisition subroutine which is used when the target object is lost.
- System 20 includes a camera 22 which is located within a partially spherical enclosure 24 .
- Enclosure 24 is tinted to allow the camera to acquire images of the environment outside of enclosure 24 and simultaneously prevent individuals in the environment being observed by camera 22 from determining the orientation of camera 22 .
- Camera 22 includes a controller and motors which provide for the panning, tilting and adjustment of the focal length of camera 22 . Panning movement of camera 22 is represented by arrow 26 , tilting movement of camera 22 is represented by arrow 28 and the changing of the focal length of the lens 23 of camera 22 , i.e., zooming, is represented by arrow 30 .
- camera 22 and enclosure 24 are a Philips AutoDome® Camera Systems brand camera system, such as the G3 Basic AutoDome® camera and enclosure, which are available from Bosch Security Systems, Inc., formerly Philips Communication, Security & Imaging, Inc., having a place of business in Lancaster, Pa.
- a camera suited for use with the present invention is described by Sergeant et al. in U.S. Pat. No. 5,627,616 entitled Surveillance Camera System, which is hereby incorporated herein by reference.
- System 20 also includes a head end unit 32 .
- Head end unit 32 may include a video switcher or a video multiplexer (not shown).
- the head end unit may include an Allegiant brand video switcher available from Bosch Security Systems, Inc., formerly Philips Communication, Security & Imaging, Inc. of Lancaster, Pa., such as an LTC 8500 Series Allegiant Video Switcher, which provides inputs for up to 64 cameras and may also be provided with eight independent keyboards and eight monitors.
- Head end unit 32 includes a keyboard 34 and joystick 36 for operator input and a display device 38 for viewing by the operator.
- a 24-volt AC power source is provided to power both camera 22 and an automated tracking unit 50.
- Illustrated system 20 is a single camera application, however, the present invention may be used within a larger surveillance system having additional cameras which may be either stationary or moveable cameras or some combination thereof to provide coverage of a larger or more complex surveillance area.
- One or more VCRs may also be connected to head end unit 32 to provide for the recording of the video images captured by camera 22 and other cameras in the system.
- tracking unit 50 receives a video feed from camera 22 via video line 44 and video line 45 is used to communicate video images to head end unit 32 .
- video lines 44 , 45 are coaxial, 75 ohm, 1 Vp-p and include BNC connectors for engagement with tracking unit 50 .
- the video images provided by camera 22 are analog and may conform to either NTSC or PAL standards.
- a MOSFET-based circuit provides a video input buffer 56, and video decoder 58 performs video decoding and passes the digitized video images to processor 60.
- the video input is no greater than 1 Vp-p; if the video signal exceeds 1 Vp-p, it is clipped to 1 Vp-p.
- Video processing is performed by processor 60 running software which is described in greater detail below.
- Processor 60 may be a TriMedia TM-1300 programmable media processor available from Philips Electronics North America Corporation.
- processor 60 loads a bootloader program from serial EEPROM 62 .
- the boot program then copies the application code from flash memory 64 to SDRAM 66 for execution.
- flash memory 64 provides 1 megabyte of memory and SDRAM 66 provides 8 megabytes of memory. Since the application code from flash memory 64 is loaded into SDRAM 66 upon start up, SDRAM 66 is left with approximately 7 megabytes of memory for video frame storage.
- a video data bus and an I2C bus connect processor 60 with video decoder 58
- an I2C bus connects processor 60 with EEPROM 62
- a XIO bus connects processor 60 with flash memory 64
- a SDRAM bus connects processor 60 with SDRAM 66
- a XIO bus connects processor 60 with UART 68
- UART 68 is used for serial communications and general purpose input/output.
- UART 68 has a 16-character FIFO buffer, a 6-bit input port and an 8-bit output port that is used to drive status LED 70, error LED 72 and output relay 74 through the use of small signal transistors.
- Relay line 49 communicates the status of double pole, single throw relay 74 to head end unit 32 .
- a RS-232 level convertor 76 provides communication between UART 68 and RS-232 serial line 48 .
- the characteristics of RS-232 line 48 and the communications conveyed thereby in the illustrated embodiment are a 3 wire connection, 19200 baud, 8 data bits, no parity, 1 stop bit and no handshaking.
- the only commands conveyed to tracking unit 50 which are input by a human operator are on/off commands. Such on/off commands and other serial communications between head end unit 32 and tracking unit 50 are conveyed by bi-phase line 46 from head end unit 32 to camera 22, and from camera 22 to tracking unit 50 via RS-232 line 48.
- tracking unit 50 is provided with a sheet metal housing and mounted proximate camera 22 .
- Alternative hardware architecture may also be employed with tracking unit 50 . Such hardware should be capable of running the software described below and processing at least approximately 5 frames per second for best results.
- Tracking unit 50 performs several functions: it controls video decoder 58 and captures video frames acquired by camera 22; it registers video frames taken at different times to remove the effects of camera motion; it performs a video content analysis to detect target objects which are in motion within the FOV of camera 22; it calculates the relative direction, speed and size of the detected target objects; it sends direction and speed commands to camera 22; it performs all serial communications associated with the above functions; and it controls the operation of the status indicators 70, 72 and relay 74.
- the first step involves initializing camera 22 and positioning camera 22 to watch for a person or moving object entering the FOV of camera 22, repeatedly acquiring 24-bit YUV color images at either NTSC or PAL CIF resolution.
- camera 22 may be moved through a predefined “tour” of the surveillance area after initialization and watch for a person or other moving object to enter the FOV of camera 22 as camera 22 searches the surveillance area.
- two images or frames acquired by camera 22 for analysis will be labeled:
- camera 22 is continually acquiring new images and the computational analysis performed by processor 60 to compare the current image with a reference image takes longer than the time interval between the individual images acquired by camera 22 .
- processor 60 completes its analysis, it will grab a new image for analysis.
- the time interval between two images which are consecutively grabbed by processor 60 is assumed to be constant by illustrated tracking unit 50. Although the time interval between two consecutively grabbed images may differ slightly, the variations are considered sufficiently small, and the processing efficiencies achieved sufficiently great, to justify this assumption.
- the term consecutive images refers to images which are consecutively grabbed by processor 60 for analysis as opposed to images which are consecutively acquired by camera 22 .
- a QCIF resolution sub-sample i.e., an image having a quarter of the resolution of the NTSC or PAL CIF resolution image
- the sub-sample groups adjacent pixels together to define an average value for the grouped pixels.
- the purpose of the sub-sampling process is to reduce the time consumed by motion detection.
- a second sub-sample of the first sub-sample (resulting in images having 1/16 the resolution of the original CIF resolution images) may also be taken to further increase the speed of the motion detection process.
- Such sub-sampling reduces the resolution of the images and can potentially degrade the ability of the system to detect the features and targets which are the subjects of interest.
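- the sub-sampling described above can be sketched as a 2x2 block average, which quarters the pixel count (e.g., CIF to QCIF); applying it twice yields 1/16 of the original resolution. A minimal illustration:

```python
# Hedged sketch of the sub-sampling step: adjacent pixels are grouped
# 2x2 and averaged to define a single value for the grouped pixels.

def subsample(img):
    """Average each 2x2 block of a grayscale image (lists of ints)."""
    out = []
    for j in range(0, len(img) - 1, 2):
        row = []
        for i in range(0, len(img[0]) - 1, 2):
            block = img[j][i] + img[j][i + 1] + img[j + 1][i] + img[j + 1][i + 1]
            row.append(block // 4)
        out.append(row)
    return out

frame = [[10, 20, 30, 40],
         [10, 20, 30, 40],
         [50, 60, 70, 80],
         [50, 60, 70, 80]]
print(subsample(frame))            # [[15, 35], [55, 75]]
print(subsample(subsample(frame))) # 1/16 the pixels: [[45]]
```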
- these sub-sampled images are labeled:
- these subsamples may be labeled ¹I₁ and ¹I₂.
- the camera may be stationary and monitoring a specific location for a moving target object.
- System 20 looks for a moving target object by computing the image difference between the two most current images every time a new frame is grabbed by processor 60 .
- the image difference is calculated by taking the absolute value of the difference between associated pixels of each image.
- images I 1 and I 2 are aligned, either because camera 22 took each image with the same FOV or because one of the images was mapped to the second image, and the image difference, Δ, is calculated in accordance with the following equation: Δ(x, y) = |I 1 (x, y) − I 2 (x, y)|.
- a histogram of these differences is then calculated. If there is a moving target in the two images, the histogram will usually have two peaks associated with it. The largest peak will typically be centered around zero and corresponds to the static regions of the image. The second major peak represents the pixels where changes in image intensity are high and corresponds to the moving areas within the image, i.e., a moving target object. The pixels associated with the second peak can be considered as outliers to the original Gaussian distribution. Since they will typically constitute less than 50% of the total number of pixels in the illustrated embodiment, they are detected using the estimation technique Least Median of Squares.
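- the difference-histogram analysis can be sketched as follows. A robust scale is estimated from the median of the squared differences, in the spirit of Least Median of Squares (valid while the moving pixels constitute under 50% of the total), and pixels beyond that scale are flagged as motion; the 2.5-sigma cutoff is an assumed illustrative value, not taken from the patent.

```python
# Hedged sketch of motion detection on two aligned frames: absolute
# pixel-wise difference, then an LMedS-style robust scale estimate,
# then flagging of outlier (moving) pixels.
import math

def motion_pixels(i1, i2):
    """Return coordinates (x, y) of pixels whose difference is an outlier."""
    diffs = [abs(a - b) for r1, r2 in zip(i1, i2) for a, b in zip(r1, r2)]
    med = sorted(d * d for d in diffs)[len(diffs) // 2]
    sigma = 1.4826 * math.sqrt(med)          # robust scale from the median
    cutoff = max(2.5 * sigma, 1.0)           # floor avoids a zero cutoff
    out = []
    for y, (r1, r2) in enumerate(zip(i1, i2)):
        for x, (a, b) in enumerate(zip(r1, r2)):
            if abs(a - b) > cutoff:
                out.append((x, y))
    return out

prev = [[10] * 5 for _ in range(5)]
curr = [[10] * 5 for _ in range(5)]
curr[2][2] = curr[2][3] = 200                # a small moving target
print(motion_pixels(prev, curr))             # [(2, 2), (3, 2)]
```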
- a point of interest (POI) corresponding to the centroid of the moving target object is then identified.
- the Sobel edge detection masks look for edges in both the horizontal and vertical directions and then combine this information into a single metric as is known in the art. More specifically, at each pixel both the Sobel X and Sobel Y operators are used to generate gradient values for that pixel, labeled gx and gy respectively.
- the edge magnitude is then calculated by equation (1):
- EdgeMagnitude = √(gx² + gy²)  (1)
- the edge of the moving target object will have large edge magnitude values and these values are used to define the edges of the target object.
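- equation (1) can be illustrated directly: the Sobel X and Sobel Y operators are applied at an interior pixel and combined into a single edge-magnitude metric. A minimal sketch:

```python
# Hedged sketch of equation (1): standard 3x3 Sobel X and Y kernels are
# convolved at one interior pixel and the gradient magnitude is formed.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_magnitude(img, x, y):
    """EdgeMagnitude = sqrt(gx^2 + gy^2) at interior pixel (x, y)."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gx += SOBEL_X[j][i] * v
            gy += SOBEL_Y[j][i] * v
    return math.sqrt(gx * gx + gy * gy)

# A vertical step edge: strong response at the boundary column.
img = [[0, 0, 100, 100]] * 4
print(edge_magnitude(img, 1, 1))  # 400.0 (gx = 400, gy = 0)
```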
- the centroid of the target object or area of motion is found by using the median and sigma values of the areas of detected motion.
- the centroid, which is the point of interest or POI, is then found in both frames and its image position coordinates stored as (x(0), y(0)) and (x(1), y(1)).
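- the median-based centroid can be sketched as follows; taking the median of the motion pixels' x and y coordinates keeps the POI robust to stray false detections. A minimal illustration:

```python
# Hedged sketch of locating the POI: the centroid of the detected motion
# area is taken as the median x and median y of the motion-pixel
# coordinates, which resists isolated spurious detections.

def poi_centroid(motion_coords):
    """Median (x, y) of a list of motion-pixel coordinates."""
    xs = sorted(x for x, _ in motion_coords)
    ys = sorted(y for _, y in motion_coords)
    return xs[len(xs) // 2], ys[len(ys) // 2]

# A blob around (10, 7) plus one spurious detection far away.
coords = [(9, 6), (10, 7), (11, 7), (10, 8), (40, 1)]
print(poi_centroid(coords))  # (10, 7)
```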
- Three related coordinate systems may be used to describe the position of the POI, its real world coordinates (X, Y, Z) corresponding to coordinate system 21 shown in FIG. 1, its image projection coordinates (x, y) and its camera coordinates ( ⁇ , ⁇ , k) which correspond to the camera pan angle, camera tilt angle and the linear distance to the POI.
- the two positions of the POI captured by the two images allow for the determination of the 3-D position of the POI in both frames as well as the relative velocity of the POI during the time interval between the two frames.
- a simplified representation of the moving person or target object in the form of the 2-D location in the image is used in this determination process.
- Tracking unit 50 does not require the two images which are used to determine the motion of the POI to be taken with the camera having the same pan, tilt and focal length settings for each image. Instead, tracking unit 50 maps or aligns one of the images with the other image and then determines the relative velocity and direction of movement of the POI. Two alternative methods of determining the velocity and direction of the POI motion are described below. The first method described below involves the use of a rotation matrix R while the second method uses a homography matrix determined by matching and aligning common stationary features which are found in each of the two images being analyzed.
- the position of the POI in the second image, (X(1), Y(1)), can be computed in a similar manner, and the real world velocity of the target object in the x and y directions, X′ and Y′ respectively, can be found by: X′ = (X(1) − X(0))/Δt and Y′ = (Y(1) − Y(0))/Δt, where Δt is the time interval between the two grabbed images.
- the time interval between consecutive images grabbed by processor 60 will be substantially constant as discussed above and, thus, the distance traveled by the target object during all such constant time intervals is directly proportional to the velocity of the target object and may be used as a proxy for the average velocity of the target object during the time interval between the acquisition of the two images.
- the sign of the velocity values is indicative of the direction of motion of the POI.
- the actual velocity may be calculated and/or images acquired at more varied time intervals may be used. With this knowledge of the velocity and direction of motion of the POI, the pan and tilt velocity of camera 22 can be controlled to keep the target object centered within the FOV of camera 22 .
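The displacement-as-velocity-proxy computation described above can be sketched as follows (an illustrative sketch, not the patented implementation; the function name is chosen for this example):

```python
def velocity_proxy(p0, p1):
    """Per-frame displacement of the POI, used as a velocity proxy.

    p0 and p1 are (X, Y) real-world POI positions from two consecutive
    frames grabbed at a substantially constant interval, so displacement
    is directly proportional to average velocity over that interval.
    """
    x_vel = p1[0] - p0[0]  # sign gives direction of motion in X
    y_vel = p1[1] - p0[1]  # sign gives direction of motion in Y
    return x_vel, y_vel

# Example: the POI moved +0.5 in X and -0.25 in Y between frames.
print(velocity_proxy((2.0, 1.0), (2.5, 0.75)))  # (0.5, -0.25)
```

Because the frame interval is held substantially constant, these signed displacements are sufficient to select the pan and tilt directions and to scale the adjustment rates.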
- camera control also includes adjusting the focal length based upon the calculated distance between camera 22 and the centroid of the target object, i.e., the POI.
- the destination focal length is assumed to be proportional to the distance between the POI and the camera; this distance, i.e., D(k), is found by the following equation:
- D(k) = √(X(k)² + Y(k)² + Z²)
- P w (k) represents the three dimensional location of the point in the world coordinate system
- X(k) is the distance of the POI from the focal point of the camera in the X direction in the real world
- Y(k) is the distance of the POI from the focal point of the camera in the Y direction in the real world.
- Z is the current focal length of the camera, i.e., the distance between the camera and the focal plane defined by the current zoom setting.
- f(k) is the focal length of the camera at time step k.
- X c , Y c and Z c are the current real world coordinates of the POI.
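Under the definitions above, D(k) reduces to the Euclidean norm of the real-world offsets (X(k), Y(k), Z); a minimal sketch (the function name is illustrative):

```python
import math

def poi_distance(x_k, y_k, z):
    """D(k): straight-line distance from the camera focal point to the
    POI, computed as the Euclidean norm of the real-world offsets."""
    return math.sqrt(x_k ** 2 + y_k ** 2 + z ** 2)

# Example: offsets of 3 and 4 with Z = 12 give distance 13.
print(poi_distance(3.0, 4.0, 12.0))  # 13.0
```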
- x cn and y cn are the horizontal and vertical distances, respectively, between the center of the image and the current image coordinates of the POI.
- x d and y d are the destination image coordinates of the POI.
- x dn and y dn are the respective horizontal and vertical distances separating the point (x 0 , y 0 ) from (x d , y d ).
- x cn and y cn are the camera coordinate equivalents of x dn and y dn .
- the angles of rotation can then be found by iteratively solving this equation.
- the angles determined by this process represent the movement of the target object between the two consecutive images, I 1 and I 2 , previously analyzed. As discussed above, the time interval between two such consecutive images is a substantially constant value and thus the angles determined by this process are target values which are a function of the velocity of the target object in the time interval between the acquisition of the two images.
- the determined angles are also a function of the original location of the target object relative to the camera, the acceleration of the object and the previous orientation of the camera.
- An alternative method of determining a target value which may be used in the control of camera 22 to track the target object and which is representative of a property of the target object involves detecting corners in images I 1 and I 2 . Corners are image points that have an intensity which significantly differs from neighboring points. Various methods of identifying and matching such corners from two images are known in the art.
- the MIC corner detection method uses a corner response function (CRF) that gives a numerical value for the corner strength at a given pixel location.
- the CRF is computed over the image and corners are detected as points where the CRF achieves a local maximum.
- the CRF is computed using the following equation:
- R = min(r A , r B )
- R is the CRF value
- r A is the horizontal intensity variation
- r B is the vertical intensity variation.
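A simplified rendering of this corner response on a grayscale image stored as a list of rows (this sketch uses a basic two-neighbor variation rather than the interpixel approximations of the full MIC method):

```python
def crf(image, x, y):
    """Corner response at interior pixel (x, y): the smaller of the
    horizontal and vertical intensity variations.  The response is large
    only when intensity changes in BOTH directions, as at a corner."""
    p = image[y][x]
    r_a = (image[y][x - 1] - p) ** 2 + (image[y][x + 1] - p) ** 2  # horizontal
    r_b = (image[y - 1][x] - p) ** 2 + (image[y + 1][x] - p) ** 2  # vertical
    return min(r_a, r_b)

img = [
    [0, 0, 0, 0],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
print(crf(img, 2, 1))  # 81 -- corner of the bright block responds strongly
print(crf(img, 2, 2))  # 0  -- a point on a straight edge does not
```

Taking the minimum of the two directional variations is what suppresses straight edges, which vary strongly in only one direction.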
- the MIC method uses a three-step process wherein the first step involves computing the CRF for each pixel in a low resolution image. Pixels having a CRF above a first threshold T 1 are identified as potential corners. This initial step efficiently rules out a significant area of the image as non-corners because the low resolution of the image limits the number of pixels which require computation of the CRF.
- the second step involves computing the CRF for the potential corner pixels using the full resolution image. If the resulting CRF is below a second threshold, T 2 , the pixel is not a corner.
- Another interpixel approximation for determining an intensity variation for the pixel may also be computed and compared to a threshold value, e.g., T 2 . If the response is below the threshold value, the pixel is not a corner.
- the third step involves locating pixels having locally maximal CRF values and labeling them corners. Nearby pixels having relatively high CRF values but which are not the local maximum will not be labeled corners. Lists, PCL 1 and PCL 2 , of the detected corners for images I 1 and I 2 respectively are then compiled and compared. The corners in the two images are compared/matched using a similarity measure such as a normalised cross-correlation (NCC) coefficient as is known in the art.
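The NCC similarity measure used to match corners might be sketched as follows for two equal-sized patches flattened to lists (illustrative only):

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-length intensity patches.
    Values near +1 indicate a strong match; near -1, inverted intensities."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in patch_a) *
                    sum((b - mean_b) ** 2 for b in patch_b))
    return num / den if den else 0.0

print(ncc([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0  (same pattern, different gain)
print(ncc([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (reversed pattern)
```

Because NCC subtracts the mean and divides by the standard deviations, the match score is insensitive to the brightness and contrast changes that can occur between frames.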
- When camera 22 is adjusted between the acquisition of the two images I 1 and I 2 , detecting the target object in the most recently acquired image requires aligning the images so that the background remains constant and only objects displaying motion relative to the background are detected.
- the adjustment of camera 22 may take the form of panning movement, tilting movement or adjustment of the focal length of camera 22 .
- Geometric transforms may be used to modify the position of each pixel within the image; equivalently, all pixels are moved from one location to a new location based upon the camera motion.
- One such method for transforming a first image to align it with a second image wherein the camera was adjusted between the acquisition of the two images is discussed by Trajkovic in U.S. Pat. App. Pub. No. 2002/0167537 A1 entitled Motion-Based Tracking With Pan-Tilt-Zoom Camera which is hereby incorporated herein by reference.
- Alignment of consecutive images requires translation, scaling and rotation of one image to align it with the previous image(s). Of these three operations, translation is the simplest. Warping, a process in which each pixel is subjected to a general user-specified transformation, may be necessary to reduce, expand, or modify an image to a standard size before further processing can be performed. Images produced by such geometric operations are approximations of the original.
- the mapping between the two images, the current image I 1 and a reference image I 2 , is defined by:
- p′ = sQRQ⁻¹p  (6)
- p and p′ denote the homographic image coordinates of the same world point in the first and second images
- s denotes the scale factor (which corresponds to the focal length of the camera)
- Q is the internal camera calibration matrix
- R is the rotation matrix between the two camera locations.
- x′ = (m 11 x + m 12 y + m 13 )/(m 31 x + m 32 y + m 33 )  (7a)
- y′ = (m 21 x + m 22 y + m 23 )/(m 31 x + m 32 y + m 33 )  (7b)
- ⁇ m ij ⁇ 3 ⁇ 3 is the homography matrix M that maps (aligns) the first image to the second image.
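Applying equations (7a) and (7b) to a point is a direct computation; the sketch below represents M as a plain 3×3 list of lists:

```python
def apply_homography(m, x, y):
    """Map image point (x, y) through the 3x3 homography M per (7a)/(7b)."""
    w = m[2][0] * x + m[2][1] * y + m[2][2]           # common denominator
    x_new = (m[0][0] * x + m[0][1] * y + m[0][2]) / w
    y_new = (m[1][0] * x + m[1][1] * y + m[1][2]) / w
    return x_new, y_new

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
shift = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]            # pure translation
print(apply_homography(identity, 2, 3))  # (2.0, 3.0)
print(apply_homography(shift, 2, 2))     # (7.0, -1.0)
```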
- Equation (6) assumes that the camera center and the center of rotation are identical, which is typically only approximately true. Additionally, in order to retrieve precise values of camera settings, i.e., pan and tilt values for determining R and zoom values for determining s, the camera must stop, which will create unnatural motion and, depending on the system retrieving the camera settings, may take a considerable length of time.
- the exemplary embodiment of the present invention computes the alignment matrix M directly from the images using equations (7a) and (7b) to avoid the necessity of acquiring information on the camera position and calibration.
- point matching between the two images is performed by first taking a QCIF sub-sample of the two images I 1 and I 2 to obtain:
- the corners are then found in the low resolution images using the MIC corner detector described above.
- the homography matrix is then computed based upon a plurality of corresponding coordinates (x,y) and (x′, y′) in the low resolution image. Corner matching is then performed on the higher resolution image by finding the best corners around positions predicted by the homography matrix calculated using the low resolution images.
- a robust method such as the RANSAC algorithm which is known in the art may be used with the higher resolution images to identify “outlier” corner points which likely correspond to moving objects within the image.
- the “outlier” corner points identified by the RANSAC algorithm are not used in the calculation of the homography matrix using the higher resolution images to avoid the bias which would be introduced by using moving points in the calculation of the homography matrix.
- the higher resolution images are used to calculate the homography matrix M.
- a translation is a pixel motion in the x or y direction by some number of pixels. Positive translations are in the direction of increasing row or column index; negative ones are the opposite. A translation in the positive direction adds rows or columns to the top or left of the image until the required increase has been achieved.
- Image rotation is performed relative to an origin, defined to be at the center of the motion and specified as an angle. Scaling an image means making it bigger or smaller by a specified factor. The following approximation of equations (7a) and (7b) is used to represent such translation, rotation and scaling:
- s is the scaling (zooming) factor.
- ⁇ is the angle of rotation about the origin
- t x is the translation in the x direction
- t y is the translation in the y direction.
- x′ = a 1 x − a 2 y + t x
- y′ = a 2 x + a 1 y + t y
- the two images, I 1 and I 2 can be aligned and the determination of the velocity and direction of the target object motion can be completed.
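The scaled-rotation-plus-translation approximation above can be sketched as follows, assuming the standard similarity-transform coefficients a1 = s·cos θ and a2 = s·sin θ (this parameterization is inferred from the stated roles of s and θ):

```python
import math

def warp_point(x, y, s, theta, t_x, t_y):
    """Approximate alignment warp:
       x' = a1*x - a2*y + t_x,   y' = a2*x + a1*y + t_y,
    with a1 = s*cos(theta) and a2 = s*sin(theta) (assumed form)."""
    a1 = s * math.cos(theta)
    a2 = s * math.sin(theta)
    return a1 * x - a2 * y + t_x, a2 * x + a1 * y + t_y

# Pure translation: no scaling (s = 1), no rotation (theta = 0).
print(warp_point(1.0, 2.0, 1.0, 0.0, 3.0, 4.0))  # (4.0, 6.0)
```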
- camera 22 is controlled in a manner which allows camera 22 to be constantly in motion. If the POI is to the left of the center of the field of view, processor 60 communicates a command to camera 22 which instructs camera 22 to pan left at a particular panning velocity or rate of adjustment.
- the panning velocity is determined by the distance the POI is from the center of the image. There is a linear relationship between the selected panning velocity and the distance between the center of the most recently acquired image and the POI in the horizontal or x direction.
- the tilting rate and direction of camera 22 are determined by the vertical distance, i.e., in the y direction, between the POI and the center of the most recently acquired image. Proportionality factors are also applied to account for the distance of the target object from the camera.
- the distance of the target object from the camera also influences the desired panning velocity.
- the more distant the object is from the camera, the slower the rate at which the panning angle must be adjusted to track it.
- the distance of the target object from the camera also impacts the desired value of the camera tilt and focal length.
- the tilt angle which places the target object in the center of the image will be determined by the distance of that object from the camera. Similarly, to maintain the target object at a given image height and assuming all target objects are the same height, the desired focal length of the camera will be determined by the distance of the target object from the camera.
- the panning and tilting velocity of camera 22 are determined by the following equations:
- X vel is the velocity or rate at which the panning angle is adjusted
- Y vel is the velocity or rate at which the tilting angle is adjusted
- x delta is the distance between the POI and the center of the image in the x direction
- y delta is the distance between the POI and the center of the image in the y direction
- x high and y high are normalization factors; and sin(tilt angle) is the sine of the camera tilt angle (measured with reference to a horizontal plane) and provides a proportionality factor which is used to account for the target object distance from the camera.
- values X vel and Y vel are computed using the distance of the POI from the center of the image and the distance of the target object from the camera. As described above, the distance of the POI from the center of the image is related to the movement of the target object over a constant time value. Values X vel and Y vel are thus a function of several properties of the target object: its position relative to the camera in the real world and the position of the target object centroid within the FOV, which is itself a function of the velocity and acceleration of the target object. Values X vel and Y vel are therefore also functions of the velocity and acceleration of the target object.
- a proportionality factor which is a function of the distance of the target object from the camera is used to adjust the selected panning and tilting adjustment rates because this distance impacts the effects of the panning and tilting adjustment of the camera.
- Consider, for example, the panning motion of the camera: when the target object is distant from the camera, only minimal panning movement will be required to track movement of the target object in the x direction and maintain the target in the center of the image. If the target object is closer to the camera, the camera will be required to pan more quickly to track the target object if it were to move at the same speed in the x direction. Similarly, a higher rate of tilting is required to track targets which are closer to the camera than those which are more distant when such targets are moving at the same speed.
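The rate equations themselves do not appear above, so the sketch below shows one plausible form consistent with the variable definitions: each rate is the POI offset normalized by an image-extent factor and scaled by sin(tilt angle) to account for target distance. The exact normalization is an assumption.

```python
import math

def pan_tilt_rates(x_delta, y_delta, x_high, y_high, tilt_angle):
    """Candidate rate equations (assumed form): offset of the POI from
    the image center, normalized by x_high/y_high, then scaled by
    sin(tilt) so that more distant targets (smaller tilt for an elevated
    camera) produce slower pan/tilt adjustment rates."""
    scale = math.sin(tilt_angle)
    x_vel = (x_delta / x_high) * scale   # sign selects pan direction
    y_vel = (y_delta / y_high) * scale   # sign selects tilt direction
    return x_vel, y_vel

# POI right of center and above it, camera tilted straight down (sin = 1):
x_vel, y_vel = pan_tilt_rates(80.0, -40.0, 160.0, 120.0, math.pi / 2)
print(x_vel > 0, y_vel < 0)  # True True -- pan right, tilt up
```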
- the focal length adjustment rate and direction, i.e., how quickly to zoom camera 22 and whether to zoom in or out, are determined using the distance of the target object from the camera.
- the process described above for aligning two images having different scales, i.e., acquired at different focal lengths, allows for system 20 to utilize dynamic zooming, i.e., adjusting the focal length of camera 22 during the tracking of the target object instead of requiring the camera to maintain a constant zoom or focal length value during tracking or for acquiring compared images.
- the largest detected moving object is selected as the target object provided that the size of the target object is larger than a predetermined threshold value, e.g., 10% of the field of view.
- the focal length of camera 22 is adjusted in a manner which attempts to maintain the target object between 10%-70% of the FOV. Tracking of the target may stop if the size of the object falls outside of this range.
- the focal length of camera 22 is adjusted to account for the distance of the target object from the camera with the goal of keeping the target object size relatively constant, e.g., 20% of the FOV, and which facilitates the observation of the target object.
- the desired focal length is determined by first estimating the target distance between the target object and the camera as follows:
- Target Distance = Camera Height/sin(tilt angle)
- R-L FOV width = Number of effective pixels/Number of lines of resolution required to identify an intruder
- Number of effective pixels is 768 (H) for NTSC video images and 752 (H) for PAL video images;
- Number of lines of resolution required to identify an intruder is expressed in lines of resolution per foot, e.g., 16 lines per foot in the exemplary embodiment.
- Desired Focal Length = Format*Target Distance ( ft )/R-L FOV width
- Format is the horizontal width in mm of the CCD (charge-coupled device) used by the camera, e.g., 3.6 mm for camera 22 .
- camera 22 is instructed to adjust its focal length setting by changing the focal length to the desired focal length value.
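The three focal-length equations above chain together as follows (NTSC defaults from the exemplary embodiment; the function name is illustrative):

```python
import math

def desired_focal_length(camera_height_ft, tilt_angle,
                         effective_pixels=768, lines_per_foot=16,
                         ccd_format_mm=3.6):
    """Desired focal length in mm, per the exemplary embodiment:
       Target Distance = Camera Height / sin(tilt angle)
       R-L FOV width   = effective pixels / lines of resolution per foot
       Focal Length    = Format * Target Distance / R-L FOV width"""
    target_distance_ft = camera_height_ft / math.sin(tilt_angle)
    rl_fov_width_ft = effective_pixels / lines_per_foot
    return ccd_format_mm * target_distance_ft / rl_fov_width_ft

# Camera mounted 12 ft high, tilted 30 degrees below horizontal:
print(round(desired_focal_length(12.0, math.radians(30)), 3))  # 1.8
```

With a 12 ft mounting height and 30° tilt, the target distance is 24 ft, the right-to-left FOV width is 768/16 = 48 ft, and the desired focal length is 3.6 mm × 24/48 = 1.8 mm.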
- the focal length adjustment of camera 22 is thus a point-to-point adjustment of the focal length. It would be possible in an alternative embodiment, however, for camera 22 to be commanded to move at a selected adjustment rate which is selected based upon the difference between the current focal length and the desired focal length similar to the manner in which the pan and tilt adjustments are made rather than to simply move to a given zoom setting.
- Camera 22 would then continue to adjust the focal length at the specified rate (and in the chosen direction, i.e., increasing or decreasing the focal length of the camera) until processor 60 communicated a second command altering the rate or direction of focal length adjustment.
- a second command could be to change the rate of change to 0 which would correspond to a constant focal length value.
- the video content analysis algorithm performs the following functions:
- Tracker Initialization: The tracker is initialized to position the camera and wait for a moving target object to enter the camera FOV.
- Corner Detection and Matching: Corner features in the background are identified and matched to estimate changes in camera position between acquisition of the images.
- Warping: Images are geometrically distorted to align images taken with differing fields of view and detect the moving target object in such images.
- Region Location and Extraction: Locating the target object in each new frame involves locating and extracting the image region corresponding to the target object.
- Point of Interest (POI) Computation: A simplified representation of the target object and its centroid is located within the two dimensional framework of the image.
- FIG. 3 provides a flow chart which graphically illustrates the general logic of the video content analysis algorithm used by system 20 as described above and which uses the homography matrix approach instead of the rotation matrix approach to identify and track the target object.
- As shown in FIG. 3, after tracking unit 50 is turned on, it is initialized at step 80 by loading a bootloader program from EEPROM 62 and copying the application code from flash memory 64 to SDRAM 66 for execution.
- Block 82 represents the remaining memory of SDRAM 66 which is available as a ring buffer for storage of video image frames for processing by processor 60 .
- processor 60 determines if the first flag is true. The first flag is true only when no images from camera 22 have been loaded to SDRAM 66 for analysis by processor 60 .
- Block 86 represents the grabbing of two images by processor 60 .
- Processor 60 then proceeds to block 88 where the current tilt value of camera 22 for each of the two images is obtained from the integral controller of camera 22 for later use in calculating the destination focal length.
- block 90 represents the taking of subsamples of the two most recently grabbed images.
- the image difference of the two subsampled images is calculated to determine if any moving objects are present in the images. (If a moving object is found then the intruder tracking functionality of unit 50 is engaged, i.e., ITE Triggering.) If a moving object is present in the images, the centroid of the moving target object is located at block 94 . A corner detection method is then used to detect corner features in the subsampled images and generate lists of such corners at block 96 . Next, at block 98 , the data for images I 1 and I 2 are swapped.
- the swapping of image data is done so that when a new image is grabbed and placed in the buffer after completing the calculations called for in steps 100 - 104 the new image and data associated therewith will overwrite the image and data associated with the older of the two images already present in the buffer.
- the POI is calculated using the highest resolution images if the POI was determined using subsample images at block 94 .
- the destination or desired focal length is then calculated at block 102 .
- the pan and tilt velocity, X vel and Y vel are calculated at block 104 .
- processor 60 communicates a command to camera 22 to adjust the focal length to the desired focal length; to pan at an adjustment rate and direction corresponding to the magnitude and sign of X vel ; and to tilt at an adjustment rate and direction corresponding to the magnitude and sign of Y vel .
- the process then returns to block 84 where the first flag will no longer be true and the process will proceed to block 108 where a single new image will be grabbed and overwrite image I 2 in the buffer.
- the tilt value of camera 22 for new image I 2 is then obtained at block 110 from the integral controller of camera 22 for later calculation of the desired focal length.
- the new image is then subsampled at block 112 and corners are detected and a list of such corners created for the subsampled images at block 114 .
- the warping and alignment process described above is then performed at block 116 to align images I 1 and I 2 .
- the image difference of the two aligned images is then calculated to determine if a moving object is included in the images.
- the centroid of the target object is determined at block 120 .
- images I 1 and I 2 and the data associated therewith are swapped as described above with respect to block 98 .
- the size of the detected target object, i.e., the Blob_Size, is compared to a threshold value and, if the target object is not large enough, or if no target object has been found in the images, the process returns to block 84 . If the target object is larger than the threshold size, the process continues on to blocks 100 through 106 where the adjustment parameters of camera 22 are determined and then communicated to camera 22 as described above.
- camera 22 may pan and tilt at different specified velocities, i.e., at selectively variable adjustment rates, and when processor 60 communicates a command to camera 22 , processor 60 instructs camera 22 to pan in a selected direction and at a selected rate, to tilt in a selected direction and at a selected rate, and to change the focal length to a desired focal length.
- camera 22 will adjust by moving to the specified focal length and panning and tilting in the specified directions and at the specified rates until camera 22 receives a second command instructing it to pan in a new selected direction and at a new selected rate, to tilt in a new selected direction and at a new selected rate, and to change the focal length to a new desired focal length.
- the panning and tilting of camera 22 may also cease prior to receiving the second command if camera 22 has a limited panning or tilting range and reaches the limit of its panning or tilting range.
- camera 22 may be continuously adjusted during the tracking of the target object without stationary intervals separating the receipt and execution of the adjustment commands and thereby provide a stream of video images with relatively smooth transitional movements.
- processor 60 may consecutively analyze a series of images which may all record different FOVs.
- the series of images may include three images consecutively analyzed by processor 60 , i.e., first, second and third images, wherein each image records a different FOV.
- Processor 60 will have communicated a previous command to camera 22 based upon earlier images, and camera 22 will be adjusted in accordance with this first command as processor 60 analyzes the first and second images. The analysis of the first and second images will result in a second command to camera 22 , and camera 22 will be adjusted in accordance with this second command as processor 60 analyzes the second and third images to formulate the next adjustment command for camera 22 .
- camera 22 will continue to pan and tilt in accordance with the first command until receipt of the second command. In this manner, camera 22 may be continuously adjusted as it acquires a series of images having different fields of views without requiring stationary intervals for the acquisition of images having common FOVs or separating the execution of adjustment commands.
- the video content analysis algorithm described above assumes that camera 22 is mounted at a known height and works best when the surveillance area and target objects conform to several characteristics. For best results, the target should be 30% to 70% of the image height, have a height to width ratio of no more than 5:1 and move less than 25% of the image width between processed frames at a constant velocity.
- System 20 tracks only one moving target at a time. If multiple targets are within the FOV, system 20 will select the largest target if it is 20% larger than the next largest target. If the largest target is not at least 20% larger than the next largest target, system 20 may change targets randomly.
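The selection rule described above might be sketched as follows (illustrative; where the described system may switch targets randomly in the ambiguous case, this sketch simply reports the ambiguity instead):

```python
def select_target(blob_sizes, fov_area, min_fraction=0.10, margin=1.20):
    """Return the index of the target blob, or None when no blob covers
    at least min_fraction of the FOV or the largest blob is not at least
    20% larger than the runner-up (ambiguous selection)."""
    if not blob_sizes:
        return None
    order = sorted(range(len(blob_sizes)),
                   key=lambda i: blob_sizes[i], reverse=True)
    best = order[0]
    if blob_sizes[best] < min_fraction * fov_area:
        return None                       # too small to track
    if len(order) > 1 and blob_sizes[best] < margin * blob_sizes[order[1]]:
        return None                       # ambiguous: no clear largest target
    return best

print(select_target([500, 100], 1000))   # 0    -- clear winner
print(select_target([120, 110], 1000))   # None -- within 20% of runner-up
print(select_target([50], 1000))         # None -- below 10% of the FOV
```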
- Alternative target object identification methods may also be used to distinguish between moving objects, such as those analyzing the color histogram of the target object.
- System 20 uses background features to detect “corners” and register subsequent images; therefore, it may fail in excessively featureless environments or if targets occupy a majority of the FOV and obscure such corner features. Divergence from these assumptions and characteristics is not necessarily fatal to the operation of system 20 and may merely degrade its performance. These assumptions concerning the illustrated embodiment cover a large subset of video surveillance applications related to restricted areas where people are not supposed to be present. It is also possible for those having ordinary skill in the art to adapt illustrated system 20 to cover additional situations which are not necessarily limited to these assumptions and characteristics.
- tracking unit 50 has three main states: 1) Tracker OFF, 2) Looking for Target and 3) Tracking Target.
- Tracking unit 50 is turned on and off by a human operator inputting commands through an input device such as keyboard 34 or joystick 36 .
- the on/off commands are routed through bi-phase cable 46 to camera 22 and RS-232 line to tracking unit 50 .
- Tracking unit 50 communicates its current status with LED indicators 70 , 72 and relay 74 .
- LED 70 emits light when unit 50 is on and flashes when unit 50 is tracking a target object.
- relay 74 communicates this information to head end unit 34 via relay line 49 .
- LED 72 emits light when unit 50 is turned on but has experienced an error such as the loss of the video signal.
- If tracking unit 50 is on, either looking for a target or tracking a target, and a higher priority activity is initiated, tracking unit 50 will turn off or become inactive. After the higher priority activity has ceased and a dwell time has elapsed, i.e., the higher priority activity has timed out, tracking unit 50 will turn back on and begin looking for a target.
- the tracking unit may give up control of camera 22 during human operator and/or camera initiated movement of camera and continue to analyze the images acquired by camera 22 to detect target objects.
- the continued detection of target objects while the camera is under the control of an operator or separate controller is possible because the tracking unit 50 does not require the images used to detect the target object to be acquired while the camera is stationary or for the images to each have the same field of view.
- Once tracking unit 50 has detected a target object, it will continuously track the target object until it can no longer locate it; for example, the target object may leave the area which is viewable by camera 22 or may be temporarily obscured by other objects in the FOV. When unit 50 first loses the target object it will enter into a reacquisition subroutine. If the target object is reacquired, tracking unit 50 will continue tracking the target object; if the target has not been found before the completion of the reacquisition subroutine, tracking unit 50 will change its status to Looking for Target and control of the camera position will be returned to either the camera controller or the human operator.
- the reacquisition subroutine is graphically illustrated by the flow chart of FIG. 5.
- In the reacquire mode, tracking unit 50 first keeps the camera at the last position in which the target was tracked for approximately 10 seconds. If the target is not reacquired, the camera is zoomed out in discrete increments, wherein the maximum zoom-in capability of the camera corresponds to 100% and no zoom (i.e., no magnifying effect) corresponds to 0%. More specifically, the camera is zoomed out to the next lowest increment of 20% and looks for the target for approximately 10 seconds in this new FOV. The camera continues to zoom out in 20% increments at 10 second intervals until the target is reacquired or the camera reaches its minimum zoom (0%) setting.
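The zoom-out schedule of the reacquisition subroutine can be sketched as the list of zoom settings to visit, each held for roughly 10 seconds (illustrative):

```python
import math

def reacquisition_schedule(start_zoom_pct):
    """Zoom settings visited while reacquiring a lost target: hold the
    current setting, then step to each next-lowest 20% increment until
    the minimum (0%) setting is reached."""
    levels = [start_zoom_pct]
    # next-lowest multiple of 20 strictly below the current setting
    z = (int(math.ceil(start_zoom_pct / 20.0)) - 1) * 20
    while z >= 0:
        levels.append(z)
        z -= 20
    return levels

print(reacquisition_schedule(70))   # [70, 60, 40, 20, 0]
print(reacquisition_schedule(100))  # [100, 80, 60, 40, 20, 0]
```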
- the status of tracking unit 50 is changed to “Looking for Target”, the position of camera 22 returns to a predefined position or “tour” and the positional control of the camera is returned to the operator or the controller embedded within camera 22 .
- system 20 uses a general purpose video processing platform that obtains video and camera control information from a standard PTZ camera.
- This configuration and use of a standard PTZ camera also allows for the retrofitting and upgrading of existing installations having installed PTZ cameras by installing tracking units 50 and coupling tracking units 50 with the existing PTZ cameras.
- a system which could be upgraded by the addition of one or more tracking units 50 is discussed by Sergeant et al. in U.S. Pat. No. 5,517,236 which is hereby incorporated herein by reference.
- Providing tracking units 50 with a sheet metal housing facilitates their mounting on or near a PTZ camera to provide PTZ control using image processing of the source video.
- System 20 thereby provides a stand alone embedded platform which does not require a personal computer-based tracking system.
- system 20 may be used to monitor manufacturing and warehouse facilities and track individuals who enter restricted areas. Head end unit 32 with display 38 and input devices 34 and 36 may be positioned at a location remote from the area being surveyed by camera 22 such as a guard room at another location in the building.
- system 20 includes a method for automatically detecting a target object, the manual selection of a target object by a human operator, such as by the operation of joystick 36 , could also be employed with the present invention. After manual selection of the target object, system 20 would track the target object as described above for target objects identified automatically.
Abstract
A video tracking system and method which includes a video camera having a selectively adjustable panning orientation, tilting orientation and focal length. A processor receives video images acquired by the camera. The processor is programmed to detect target objects in the images and selectively adjust the camera to track the target object. The camera is adjusted at variable rates which are selected as a function of a property, such as the velocity, of the target object. The focal length of the camera is selectively adjusted as a function of the distance of the target object from the camera. The images acquired by the camera are geometrically transformed to align images having different fields of view to facilitate the analysis of the images and thereby allowing the camera to be continuously adjustable for the production of video images having relatively smooth transitional movements.
Description
- 1. Field of the Invention
- The present invention relates to a video camera system for tracking a moving object.
- 2. Description of the Related Art
- There are numerous known video surveillance systems which may be used to track a moving object such as a person or vehicle. Some such systems utilize a fixed camera having a stationary field of view (FOV). To fully cover a given surveillance site with a fixed camera system, however, it will oftentimes be necessary to use a significant number of fixed cameras.
- Movable cameras which may pan, tilt and/or zoom may also be used to track objects. The use of a PTZ (pan, tilt, zoom) camera system will typically reduce the number of cameras required for a given surveillance site and also thereby reduce the number and cost of the video feeds and system integration hardware such as multiplexers and switchers associated therewith.
- Visual surveillance systems will also often rely upon human operators. The use of human operators, however, is subject to several limiting factors such as relatively high hourly costs, susceptibility to fatigue when performing tedious and boring tasks, inability to concentrate on multiple images simultaneously and accidental/intentional human error. To reduce the impact of such human limitations, automated video tracking systems have been used to assist or replace human operators.
- Three primary steps typically employed in automated video tracking systems involve background subtraction, target detection and target tracking. The use of fixed cameras greatly simplifies and speeds the background subtraction and target detection processes. When a PTZ system is employed, the camera is typically repositioned by analyzing the motion of the target object and predicting a future location of the target object. The camera is then adjusted to reposition the estimated future location of the target object in the center of the FOV. The camera may then remain stationary as the target object moves. The camera will then be repositioned to once again recenter the target object. Such discrete camera movements are continually repeated to track the target object. Conventionally, each discrete camera movement occurs at the fastest camera movement speeds available wherein each of the panning movements will be conducted at a common pan rate, each of the tilting movements will be conducted at a common tilt rate and each of the zooming movements, i.e., adjusting the focal length of the camera, will be conducted at a common zoom rate. The resulting series of discrete camera movements typically leads to a video image which is “jumpy” in comparison to a video image produced by the manual tracking of a target object by a skilled human operating a joystick or other camera control.
- The present invention provides an automated video tracking system having a movable camera wherein the automatic adjustment of the camera when tracking a target object may be done continuously and at various speeds to provide a video image with relatively smooth transitional movements during the tracking of the target object.
- The invention comprises, in one form thereof, a video tracking system which includes a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. Also included is at least one processor which is operably coupled to the camera. The processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and adjust the camera to track the target object wherein the processor adjusts the camera at a plurality of varied adjustment rates.
- The invention comprises, in another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. Also included in the system is at least one processor which is operably coupled to the camera. The processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and estimate a target value wherein the target value is a function of a property of the target object. The property may be the velocity of the target object. The processor adjusts the camera at a selected adjustment rate which is a function of the target value.
- In alternative embodiments, such systems may include a processor which selects the adjustment rate of the camera as a function of at least one property of the target object. The at least one property of the target object may include the velocity of the target object. The camera may be selectively adjustable at a variable rate in adjusting at least one of a panning orientation of the camera and a tilt orientation of the camera.
- The processor may also be programmed to select the adjustment rate of the camera based upon analysis of a first image and a second image wherein the first image is acquired by the camera adjusted to define a first field of view and the second image is acquired by the camera adjusted to define a second field of view. The first and second fields of view may be partially overlapping and the determination of the selected adjustment rate by the processor may include identifying and aligning at least one common feature represented in each of the first and second images. The camera may also define a third field of view as the camera is being adjusted at the selected adjustment rate with a third image being acquired by the camera when it defines the third field of view and wherein the first, second and third images are consecutively analyzed by the processor. The camera may have a selectively adjustable focal length and the processor may select the focal length of the camera as a function of the distance of the target object from the camera.
- The adjustment of the camera may include selective panning movement of the camera wherein the panning movement defines an x-axis, selective tilting movement of the camera wherein the tilting movement defines a y-axis, and selective focal length adjustment of the camera wherein adjustment of the focal length defines a z-axis with the x, y and z axes being oriented mutually perpendicular. The processor may adjust the camera at a selected panning rate which is a function of the velocity of said target object along the x-axis and at a selected tilting rate which is a function of the velocity of the target object along the y-axis. The camera may also be adjusted at a first selected adjustment rate until the processor selects a second adjustment rate and communicates the second adjustment rate to the camera.
- The tracking system may also include a display device and an input device operably coupled to said system wherein an operator may view the video images on the display device and input commands or data into the system through the input device. The display device and input device may be positioned remotely from said camera.
- The invention comprises, in yet another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. The system also includes at least one processor operably coupled to the camera. The processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and adjust the camera and track the target object. During tracking of the target object, the processor communicates a plurality of commands to the camera and the camera is continuously and variably adjustable in accordance with the commands without intervening stationary intervals.
- The camera of such a system may be selectively adjustable at a variable rate in adjusting at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera. The camera may acquire images for analysis by the processor while being adjusted and the continuous and variable adjustment of the camera includes varying either a direction of adjustment or a rate of adjustment. The commands may involve a first command which adjusts the camera at a selected rate and direction until a second command is received by the camera.
- The invention comprises, in still another form thereof, a video tracking system including a video camera having a field of view wherein the camera is selectively adjustable and adjustment of the camera varies the field of view of the camera. The system also includes at least one processor operably coupled to the camera wherein the processor receives video images acquired by the camera and selectively adjusts the camera. The processor is programmed to detect a moving target object in the video images and adjust the camera and track the target object. The processor can consecutively analyze first, second and third images acquired by the camera wherein each of the images records a different field of view. The processor communicates to the camera a first command selectively adjusting the camera and a second command selectively adjusting the camera. The camera is adjusted in accordance with the first command during at least a portion of a first time interval between acquisition of the first and second images. The camera is adjusted in accordance with the second command during at least a portion of a second time interval between acquisition of the second and third images. The camera is continuously adjusted between acquisition of the first image and the third image.
- The invention comprises, in another form thereof, a method of tracking a target object with a video camera. The method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera. The method also includes adjusting the camera at a selectively variable adjustment rate to track a target object. The adjustment rate may be selected as a function of at least one property of the target object.
- The invention comprises, in yet another form thereof, a method of tracking a target object with a video camera. The method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera. The method also includes detecting a target object in images acquired by said camera, estimating a target value which is a function of at least one property of the target object and adjusting the camera at a selectively variable rate wherein the adjustment rate of the camera is selected as a function of the target value.
- In alternative embodiments of the above-described methods, the at least one property of the target object may include the velocity of the target object. The adjustment rate may be selected based upon analysis of a first image and a second image wherein the first image is acquired by the camera when adjusted to define a first field of view and the second image is acquired by the camera when adjusted to define a second field of view. The first and second fields of view may be partially overlapping and the determination of the adjustment rate may include identifying and aligning at least one common feature represented in each of the first and second images. The adjusting of the camera at a selectively variable adjustment rate may include adjusting at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera and the selected variable adjustment rates may be selected as a function of the velocity of the target object. The determination of the adjustment rates may also involve the use of a proportionality factor which is a function of the real world distance of the target object from the camera. The adjustment of the camera may also include adjusting the camera at a first selected adjustment rate until a second selected adjustment rate is communicated to the camera.
- The invention comprises, in another form thereof, a method of tracking a target object with a video camera. The method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera. The method also includes adjusting the camera to track a target object wherein the adjustment of the camera includes selectively and variably adjusting at least one adjustment parameter and wherein the camera is continuously adjustable during the selective and variable adjustment of the at least one adjustment parameter.
- The selective and variable adjustment of at least one adjustment parameter of the camera may include the adjustment of at least one, or each, of a panning orientation of said camera and a tilt orientation of said camera. The adjustment of such parameters may be selective and variable. The selective and variable adjustment of such parameters may include the varying of either the direction of adjustment or the rate of adjustment and the rate of adjustment may be selected as a function of the velocity of the target object.
- The invention comprises, in another form thereof, a method of tracking a target object with a video camera. The method includes providing a video camera which has a field of view and is selectively adjustable wherein adjustment of the camera varies the field of view of the camera. The method also includes detecting a target object in images acquired by the camera and acquiring first, second and third images wherein each of the first, second and third images record a different field of view. The method also includes communicating a first command to the camera selectively adjusting the camera and communicating a second command to the camera selectively adjusting the camera. Further included is the step of continuously adjusting the camera between acquisition of the first image and acquisition of the third image wherein the camera is adjusted in accordance with the first command during at least a portion of a first time interval between acquisition of the first image and acquisition of the second image and the camera is adjusted in accordance with the second command during at least a portion of a second time interval between acquisition of the second image and acquisition of the third image.
- The first and second commands may selectively adjust at least one, or each, of a panning orientation of the camera and a tilt orientation of the camera. The adjustment of such parameters may be at a selectively variable adjustment rate and the rates may be selected as a function of the velocity of the target object.
- The invention comprises, in yet another form thereof, a video tracking system having a video camera with a selectively adjustable focal length. Also included is at least one processor operably coupled to said camera wherein the processor receives video images acquired by the camera and selectively adjusts the focal length of the camera. The processor is programmed to detect a moving target object in the video images and adjust the focal length of the camera as a function of the distance of the target object from the camera. The camera of the system may also have a selectively adjustable panning orientation and a selectively adjustable tilting orientation wherein the processor adjusts the panning orientation and the tilting orientation to maintain the target object centered in the video images and selectively adjusts the focal length of the camera as a function of the tilt angle.
- The invention comprises, in still another form thereof, a method of automatically tracking a target object with a video camera. The method includes providing a video camera having a selectively adjustable focal length and adjusting the focal length of the camera as a function of the distance of the target object from the camera. The camera used with such a method may also have a selectively adjustable panning orientation and a selectively adjustable tilting orientation wherein tracking the object involves adjusting the panning and tilting orientation of the camera and selectively adjusting the focal length of the camera as a function of the tilt angle of the camera.
- An advantage of the present invention is that it provides video images which reflect relatively fluid transitional camera movements during the tracking of the target object and which do not “jump” from point to point when tracking the target object. The resulting video is typically regarded as more pleasant to view and less distracting to human operators who are viewing the video to observe the behavior of the target object.
- Another advantage of the present invention is that it allows for images acquired for automatic tracking purposes to be obtained while the camera is in motion and thus does not require the camera to rest in a stationary position for image acquisition during the tracking of a target object.
- Yet another advantage of the present invention is that it allows the system to continue tracking a target object while a human operator manually repositions the camera because the tracking system may utilize a series of images which do not have a common field of view to track the target object.
- Still another advantage of the present invention is that it may be used with conventional pan, tilt, zoom (PTZ) cameras and, thus, facilitates the retrofitting and upgrading of existing installations having such conventional PTZ cameras.
- The above mentioned and other features and objects of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of an embodiment of the invention taken in conjunction with the accompanying drawings, wherein:
- FIG. 1 is a schematic view of a video surveillance system in accordance with the present invention.
- FIG. 2 is a schematic view of the automated tracking unit.
- FIG. 3 is a flowchart representing the operation of the video surveillance system.
- FIG. 4 is a flow chart representing the different status levels of the tracking unit.
- FIG. 5 is a flow chart representing the reacquisition subroutine which is used when the target object is lost.
- Corresponding reference characters indicate corresponding parts throughout the several views. Although the exemplification set out herein illustrates an embodiment of the invention, in one form, the embodiment disclosed below is not intended to be exhaustive or to be construed as limiting the scope of the invention to the precise form disclosed.
- In accordance with the present invention, a
video surveillance system 20 is shown in FIG. 1. System 20 includes a camera 22 which is located within a partially spherical enclosure 24. Enclosure 24 is tinted to allow the camera to acquire images of the environment outside of enclosure 24 and simultaneously prevent individuals in the environment being observed by camera 22 from determining the orientation of camera 22. Camera 22 includes a controller and motors which provide for the panning, tilting and adjustment of the focal length of camera 22. Panning movement of camera 22 is represented by arrow 26, tilting movement of camera 22 is represented by arrow 28 and the changing of the focal length of the lens 23 of camera 22, i.e., zooming, is represented by arrow 30. As shown with reference to coordinate system 21, panning motion may track movement along the x-axis, tilting motion may track movement along the y-axis and focal length adjustment may be used to track movement along the z-axis. In the illustrated embodiment, camera 22 and enclosure 24 are a Phillips AutoDome® Camera Systems brand camera system, such as the G3 Basic AutoDome® camera and enclosure, which are available from Bosch Security Systems, Inc., formerly Phillips Communication, Security & Imaging, Inc., having a place of business in Lancaster, Pa. A camera suited for use with the present invention is described by Sergeant et al. in U.S. Pat. No. 5,627,616 entitled Surveillance Camera System which is hereby incorporated herein by reference. -
System 20 also includes a head end unit 32. Head end unit 32 may include a video switcher or a video multiplexer (not shown). For example, the head end unit may include an Allegiant brand video switcher available from Bosch Security Systems, Inc., formerly Phillips Communication, Security & Imaging, Inc., of Lancaster, Pa., such as an LTC 8500 Series Allegiant Video Switcher which provides inputs for up to 64 cameras and may also be provided with eight independent keyboards and eight monitors. Head end unit 32 includes a keyboard 34 and joystick 36 for operator input and a display device 38 for viewing by the operator. A 24 volt a/c power source 40 is provided to power both camera 22 and an automated tracking unit 50. -
Illustrated system 20 is a single camera application; however, the present invention may be used within a larger surveillance system having additional cameras, which may be either stationary or moveable cameras or some combination thereof, to provide coverage of a larger or more complex surveillance area. One or more VCRs may also be connected to head end unit 32 to provide for the recording of the video images captured by camera 22 and other cameras in the system. - The hardware architecture of tracking
unit 50 is schematically represented in FIG. 2. A power line 42 connects power source 40 to converter 52 to power tracking unit 50. Tracking unit 50 receives a video feed from camera 22 via video line 44 and video line 45 is used to communicate video images to head end unit 32. In the illustrated embodiment, video lines 44 and 45 are connected to tracking unit 50. The video images provided by camera 22 are analog and may conform to either NTSC or PAL standards. When tracking unit 50 is inactive, i.e., turned off, video images from camera 22 pass through tracking unit 50 to head end unit 32 as shown by analog video line 54. A MOSFET based circuit provides a video input buffer 56 and video decoder 58 performs video decoding and passes the digitized video images to processor 60. In the illustrated embodiment, video input is no greater than 1 Vp-p and if the video signal exceeds 1 Vp-p it will be clipped to 1 Vp-p. Video processing is performed by processor 60 running software which is described in greater detail below. Processor 60 may be a TriMedia TM-1300 programmable media processor available from Phillips Electronics North America Corporation. At start up, processor 60 loads a bootloader program from serial EEPROM 62. The boot program then copies the application code from flash memory 64 to SDRAM 66 for execution. In the illustrated embodiment, flash memory 64 provides 1 megabyte of memory and SDRAM 66 provides 8 megabytes of memory. Since the application code from flash memory 64 is loaded on SDRAM 66 upon start up, SDRAM 66 is left with approximately 7 megabytes of memory for video frame storage. - As shown in FIG. 2, a video data bus and I2C bus connects
processor 60 with video decoder 58, an I2C bus connects processor 60 with EEPROM 62, an XIO bus connects processor 60 with flash memory 64, an SDRAM bus connects processor 60 with SDRAM 66 and an XIO bus connects processor 60 with UART 68. UART 68 is used for serial communications and general purpose input/output. UART 68 has a 16 character FIFO buffer, a 6-bit input port and an 8-bit output port that is used to drive status LED 70, error LED 72 and output relay 74 through the use of small signal transistors. Relay line 49 communicates the status of double pole, single throw relay 74 to head end unit 32. An RS-232 level convertor 76 provides communication between UART 68 and RS-232 serial line 48. The characteristics of RS-232 line 48 and the communications conveyed thereby in the illustrated embodiment are a 3 wire connection, 19200 baud, 8 data bits, no parity, 1 stop bit and no handshaking. - In the illustrated embodiment, the only commands conveyed to tracking
unit 50 which are input by a human operator are on/off commands. Such on/off commands and other serial communications between head end unit 32 and tracking unit 50 are conveyed by bi-phase line 46 from head end unit 32 to camera 22 and to tracking unit 50 from camera 22 via RS-232 line 48. In the illustrated embodiment, tracking unit 50 is provided with a sheet metal housing and mounted proximate camera 22. Alternative hardware architecture may also be employed with tracking unit 50. Such hardware should be capable of running the software described below and processing at least approximately 5 frames per second for best results. -
Tracking unit 50 performs several functions: it controls video decoder 58 and captures video frames acquired by camera 22; it registers video frames taken at different times to remove the effects of camera motion; it performs a video content analysis to detect target objects which are in motion within the FOV of camera 22; it calculates the relative direction, speed and size of the detected target objects; it sends direction and speed commands to camera 22; it performs all serial communications associated with the above functions; and it controls the operation of the status indicators 70, 72 and relay 74. - The operation of
system 20 will now be described in greater detail. When tracking unit 50 is first activated, the first step involves initializing camera 22 and positioning camera 22 to watch for a person or moving object to enter the FOV of camera 22, taking repeated images as 24-bit YUV color images at either NTSC or PAL CIF resolution. Alternatively, camera 22 may be moved through a predefined “tour” of the surveillance area after initialization and watch for a person or other moving object to enter the FOV of camera 22 as camera 22 searches the surveillance area. For reference purposes, two images or frames acquired by camera 22 for analysis will be labeled:
- In the exemplary embodiment,
camera 22 is continually acquiring new images and the computational analysis performed byprocessor 60 to compare the current image with a reference image takes longer than the time interval between the individual images acquired bycamera 22. Whenprocessor 60 completes its analysis, it will grab a new image for analysis. The time interval between two images which are consecutively grabbed byprocessor 60 is assumed to be constant byillustrated tracking unit 50. Although the time interval between two consecutively grabbed images may differ slightly, the variations are considered sufficiently small and the processing efficiencies achieved by this assumption to be sufficiently great to justify this assumption. As used herein unless otherwise indicated, the term consecutive images refers to images which are consecutively grabbed byprocessor 60 for analysis as opposed to images which are consecutively acquired bycamera 22. A QCIF resolution sub-sample (i.e., an image having a quarter of the resolution of the NTSC or PAL CIF resolution image) of the current I1 and I2 images is created. The sub-sample groups adjacent pixels together to define an average value for the grouped pixels. The purpose of the sub-sampling process is to reduce the time consumed by motion detection. A second sub-sample of the first sub-sample (resulting in images having {fraction (1/16)} the resolution of the original CIF resolution images) may also be taken to further increase the speed of the motion detection process. Such sub-sampling, however, reduces the resolution of the images and can potentially degrade the ability of system to detect the features and targets which are the subjects of interest. For reference purposes these sub-sampled images are labeled: - I1 1, I1 2, I2 1, I2 2
- If only a single sub-sample of each image is taken, these sub-samples are labeled:
- I1 1, I2 1
- Alternatively, these subsamples may be labeled1I1 and 1I2.
- Target Object Detection
- Initially, the camera may be stationary and monitoring a specific location for a moving target object.
System 20 looks for a moving target object by computing the image difference between the two most current images every time a new frame is grabbed by processor 60. The image difference is calculated by taking the absolute value of the difference between associated pixels of each image. When images I1 and I2 are aligned, either because camera 22 took each image with the same FOV or because one of the images was mapped to the second image, the image difference, Δ, is calculated in accordance with the following equation:
- A histogram of these differences is then calculated. If there is a moving target in the two images, the histogram will usually have two peaks associated with it. The largest peak will typically be centered around zero and corresponds to the static regions of the image. The second major peak represents the pixels where changes in image intensity are high and corresponds to the moving areas within the image, i.e., a moving target object. The pixels associated with the second peak can be considered as outliers to the original Gaussian distribution. Since they will typically constitute less than 50% of the total number of pixels in the illustrated embodiment, they are detected using the estimation technique Least Median of Squares.
- An alternative method that may be used with the present invention and which provides for the manual identification of a target object for tracking purposes is discussed by Trajkovic et al. in U.S. Pat. App. Pub. 2002/0140813 A1 entitled Method For Selecting A Target In An Automated Video Tracking System which is hereby incorporated herein by reference. A method for detecting motion of target objects that may be used with the present invention is discussed by Trajkovic in U.S. Pat. App. Pub. 2002/0168091 A1 entitled Motion Detection Via Image Alignment which is hereby incorporated herein by reference.
- Identification of Point of Interest
- After detecting motion, a point of interest (POI) corresponding to the centroid of the moving target object is then identified. By convolution with the Sobel operators, the Sobel edge detection masks look for edges in both the horizontal and vertical directions and then combine this information into a single metric as is known in the art. More specifically, at each pixel both the Sobel X and the Sobel Y operators are used to generate gradient values for that pixel, labeled gx and gy respectively. The edge magnitude is then calculated by equation (1):
- EdgeMagnitude = √(gx² + gy²) (1)
- The edge of the moving target object will have large edge magnitude values and these values are used to define the edges of the target object. The centroid of the target object or area of motion is found by using the median and sigma values of the areas of detected motion. The centroid, which is the point of interest or POI, is then found in both frames and its image position coordinates stored as (x(0), y(0)) and (x(1), y(1)).
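Equation (1) and the median-based centroid can be sketched as follows. This is an illustrative implementation of the general technique; a production system would use an optimized convolution routine, and the helper names are assumptions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T                       # vertical-gradient mask

def filter2d(img, kernel):
    """Naive 'valid'-mode 2-D filtering, adequate for a 3x3 Sobel mask."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_magnitude(img):
    gx = filter2d(img, SOBEL_X)           # horizontal gradient
    gy = filter2d(img, SOBEL_Y)           # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)     # equation (1)

def centroid(mask):
    """Median-based centroid (the POI) of the detected motion area."""
    ys, xs = np.nonzero(mask)
    return float(np.median(xs)), float(np.median(ys))
```

A vertical step edge produces large magnitudes only in the columns straddling the step, which is how the target outline is localized before the centroid is taken.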
- Three related coordinate systems may be used to describe the position of the POI, its real world coordinates (X, Y, Z) corresponding to coordinate
system 21 shown in FIG. 1, its image projection coordinates (x, y) and its camera coordinates (α, β, k) which correspond to the camera pan angle, camera tilt angle and the linear distance to the POI. The two positions of the POI captured by the two images allow for the determination of the 3-D position of the POI in both frames as well as the relative velocity of the POI during the time interval between the two frames. A simplified representation of the moving person or target object in the form of the 2-D location in the image is used in this determination process. -
Tracking unit 50 does not require the two images which are used to determine the motion of the POI to be taken with the camera having the same pan, tilt and focal length settings for each image. Instead, tracking unit 50 maps or aligns one of the images with the other image and then determines the relative velocity and direction of movement of the POI. Two alternative methods of determining the velocity and direction of the POI motion are described below. The first method described below involves the use of a rotation matrix R while the second method uses a homography matrix determined by matching and aligning common stationary features which are found in each of the two images being analyzed.
-
- For an arbitrary point having image projection coordinates (x, y), the relation between the world coordinates, Pw, of an arbitrary point P and its camera coordinates, Pc, is given as:
- P w =RP c
-
-
- Assuming the target object to be a person of average height, the height can be considered a constant (i.e., Z(0)=Z=Constant) and equations (3a) and (3b) will represent a linear system with two unknowns (X(0), Y(0)) which is easily solved. The position of the POI in the second image, (X(1), Y(1)), can be computed in a similar manner, and the real world velocity of the target object in the x and y directions, X′ and Y′ respectively, can be found by:
- X′=X(1)−X(0) (3c)
- Y′=Y(1)−Y(0) (3d)
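Expressed in code, the displacement computation of equations (3c) and (3d) is straightforward; the sketch below (function and variable names are illustrative, not from the patent) returns the displacement pair whose signs indicate the direction of motion:

```python
def displacement(p0, p1):
    """Frame-to-frame displacement of the POI in world coordinates.

    Under a constant inter-frame interval, the displacement is directly
    proportional to velocity and serves as a proxy for it; the sign of
    each component gives the direction of motion.
    """
    (X0, Y0), (X1, Y1) = p0, p1
    x_vel = X1 - X0  # equation (3c)
    y_vel = Y1 - Y0  # equation (3d)
    return x_vel, y_vel
```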
- Although the values for X′ and Y′ obtained in accordance with equations (3c) and (3d) are literally distances, the time interval between consecutive images grabbed by
processor 60 will be substantially constant as discussed above and, thus, the distance traveled by the target object during all such constant time intervals is directly proportional to the velocity of the target object and may be used as a proxy for the average velocity of the target object during the time interval between the acquisition of the two images. The sign of the velocity values is indicative of the direction of motion of the POI. In alternative embodiments, the actual velocity may be calculated and/or images acquired at more varied time intervals may be used. With this knowledge of the velocity and direction of motion of the POI, the pan and tilt velocity of camera 22 can be controlled to keep the target object centered within the FOV of camera 22. - In one embodiment, camera control also includes adjusting the focal length based upon the calculated distance between
camera 22 and the centroid of the target object, i.e., the POI. The destination focal length is assumed to be proportional to the distance between the POI and the camera; this distance, D(k), is found by the following equation: - D(k) = ∥Pw(k)∥ = √(X(k)² + Y(k)² + Z²)
- wherein:
- Pw(k) represents the three dimensional location of the point in the world coordinate system;
- X(k) is the distance of the POI from the focal point of the camera in the X direction in the real world;
- Y(k) is the distance of the POI from the focal point of the camera in the Y direction in the real world; and
- Z is the current focal length of the camera, i.e., the distance between the camera and the focal plane defined by the current zoom setting.
- It is desired to keep this distance expressed as focal length units by use of the following:
- D(k)=cf(k)
- wherein:
- f(k) is the focal length of the camera at time step k; and
- c is a constant.
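A minimal sketch of this distance-to-focal-length relationship (names are illustrative; the constant c would be calibrated for the particular camera):

```python
import math

def distance_to_poi(X, Y, Z):
    # D(k) = ||Pw(k)|| = sqrt(X(k)^2 + Y(k)^2 + Z^2)
    return math.sqrt(X * X + Y * Y + Z * Z)

def destination_focal_length(X, Y, Z, c):
    # From D(k) = c * f(k), the destination focal length is f(k) = D(k) / c
    return distance_to_poi(X, Y, Z) / c
```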
-
-
- wherein:
- Xc, Yc and Zc are the current real world coordinates of the POI; and
- xcn and ycn are, respectively, the horizontal and vertical distances between the center of the image and the current image coordinates of the POI.
-
- wherein xd and yd are the destination image coordinates of the POI.
-
- wherein xdn and ydn are the respective horizontal and vertical distances separating the point (x0, y0) from (xd, yd).
-
- After expansion, this equation may be written as:
- xcn cos β + sin β = xdn(−xcn cos α sin β + ycn sin α + cos α cos β)
- xcn sin α sin β + ycn cos α − sin α cos β = ydn(−xcn cos α sin β + ycn sin α + cos α cos β)
- wherein xcn and ycn are the camera coordinate equivalents of xdn and ydn. The angles of rotation can then be found by iteratively solving these equations. The angles determined by this process represent the movement of the target object between the two consecutive images, I1 and I2, previously analyzed. As discussed above, the time interval between two such consecutive images is a substantially constant value and thus the angles determined by this process are target values which are a function of the velocity of the target object in the time interval between the acquisition of the two images. The determined angles are also a function of the original location of the target object relative to the camera, the acceleration of the object and the previous orientation of the camera. - Homography Matrix Method
- An alternative method of determining a target value which may be used in the control of
camera 22 to track the target object and which is representative of a property of the target object involves detecting corners in images I1 and I2. Corners are image points that have an intensity which significantly differs from neighboring points. Various methods of identifying and matching such corners from two images are known in the art. - One such known corner detection method is the MIC (minimum intensity change) corner detection method. The MIC corner detection method uses a corner response function (CRF) that gives a numerical value for the corner strength at a given pixel location. The CRF is computed over the image and corners are detected as points where the CRF achieves a local maximum. The CRF is computed using the following equation:
- R=min (r A ,r B)
- wherein:
- R is the CRF value;
- rA is the horizontal intensity variation; and
- rB is the vertical intensity variation.
- The MIC method uses a three-step process wherein the first step involves computing the CRF for each pixel in a low resolution image. Pixels having a CRF above a first threshold T1 are identified as potential corners. This initial step will efficiently rule out a significant area of the image as non-corners because the low resolution of the image limits the number of pixels which require the computation of the CRF. The second step involves computing the CRF for the potential corner pixels using the full resolution image. If the resulting CRF is below a second threshold, T2, the pixel is not a corner. For pixels which have a CRF which satisfies the second threshold, T2, another interpixel approximation for determining an intensity variation for the pixel may also be computed and compared to a threshold value, e.g., T2. If the response is below the threshold value, the pixel is not a corner. The third step involves locating pixels having locally maximal CRF values and labeling them corners. Nearby pixels having relatively high CRF values but which are not the local maximum will not be labeled corners. Lists, PCL1 and PCL2, of the detected corners for images I1 and I2 respectively are then compiled and compared. The corners in the two images are compared/matched using a similarity measure such as a normalized cross-correlation (NCC) coefficient as is known in the art.
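The corner response R = min(rA, rB) can be sketched as follows; note that the exact neighborhoods used for the horizontal and vertical intensity variations are an assumption here (the published MIC method uses diametrically opposed points on a circular neighborhood), so this illustrates the min-of-two-directions idea rather than the patented implementation:

```python
import numpy as np

def mic_response(img):
    """Corner response R = min(rA, rB) at each interior pixel.

    rA and rB approximate the horizontal and vertical intensity
    variation using the two immediate neighbors in each direction;
    taking the minimum suppresses edges, where the variation along
    the edge direction is zero.
    """
    img = img.astype(float)
    R = np.zeros_like(img)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            c = img[y, x]
            rA = (img[y, x - 1] - c) ** 2 + (img[y, x + 1] - c) ** 2
            rB = (img[y - 1, x] - c) ** 2 + (img[y + 1, x] - c) ** 2
            R[y, x] = min(rA, rB)
    return R
```

On a bright square against a dark background, the response is nonzero at the four corners of the square and zero along its edges and in flat regions, which is the behavior the thresholding and local-maximum steps rely on.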
- When
camera 22 is adjusted between the acquisition of the two images I1 and I2, detecting the target object in the most recently acquired image requires aligning the images so that the background remains constant and only objects displaying motion relative to the background are detected. The adjustment of camera 22 may take the form of panning movement, tilting movement or adjustment of the focal length of camera 22. Geometric transforms may be used to modify the position of each pixel within the image. Another way to think of this is as the moving of all pixels from one location to a new location based upon the camera motion. One such method for transforming a first image to align it with a second image wherein the camera was adjusted between the acquisition of the two images is discussed by Trajkovic in U.S. Pat. App. Pub. No. 2002/0167537 A1 entitled Motion-Based Tracking With Pan-Tilt-Zoom Camera which is hereby incorporated herein by reference. - Alignment of consecutive images requires translation, scaling and rotation of one image to align it with the previous image(s). Of these three operations, translation is the simplest. Warping, a process in which each pixel is subjected to a general user-specified transformation, may be necessary to reduce, expand, or modify an image to a standard size before further processing can be performed. Images produced by such geometric operations are approximations of the original. The mapping between the two images, the current image I1 and a reference image I2, is defined by:
- p′ = sQRQ⁻¹p = Mp (6)
- where p and p′ denote the homogeneous image coordinates of the same world point in the first and second images, s denotes the scale factor (which corresponds to the focal length of the camera), Q is the internal camera calibration matrix, and R is the rotation matrix between the two camera locations.
-
- Where └mij┘3×3 is the homography matrix M that maps (aligns) the first image to the second image.
- The main problem of image alignment, therefore, is to determine the matrix M. From equation (6), it is clear that given s, Q and R it is theoretically straightforward to determine matrix M. In practice, however, the exact values of s, Q, and R are generally not known. Equation (6) assumes that the camera center and the center of rotation are identical, which is typically only approximately true. Additionally, in order to retrieve precise values of camera settings, i.e., pan and tilt values for determining R and zoom values for determining s, the camera must stop, which creates unnatural motion and, depending on the system retrieving the camera settings, may take a considerable length of time.
- The exemplary embodiment of the present invention computes the alignment matrix M directly from the images using equations (7a) and (7b) to avoid the necessity of acquiring information on the camera position and calibration. Point matching between the two images is performed by first taking a QCIF sub-sample of the two images I1 and I2 to obtain:
- I1 1, I2 1
- It is also possible to take a further QCIF sub-sample of the sub-sampled images to provide the following set of lower resolution images:
- I1 1, I1 2, I2 1, I2 2
- The corners are then found in the low resolution images using the MIC corner detector described above. The homography matrix is then computed based upon a plurality of corresponding coordinates (x, y) and (x′, y′) in the low resolution image. Corner matching is then performed on the higher resolution image by finding the best corners around positions predicted by the homography matrix calculated using the low resolution images. A robust method such as the RANSAC algorithm, which is known in the art, may be used with the higher resolution images to identify “outlier” corner points which likely correspond to moving objects within the image. The “outlier” corner points identified by the RANSAC algorithm are not used in the calculation of the homography matrix using the higher resolution images, to avoid the bias which would be introduced by using moving points in the calculation of the homography matrix. After removing the “outlier” corners using the RANSAC algorithm, the higher resolution images are used to calculate the homography matrix M.
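The RANSAC outlier-rejection step can be illustrated with a deliberately simplified motion model; the sketch below uses a pure translation rather than the full homography, so it shows only how matches on moving objects are excluded from the model fit (names and parameter defaults are illustrative):

```python
import random

def ransac_translation(matches, threshold=2.0, iterations=100, seed=0):
    """Return the inlier subset of (src, dst) corner matches.

    Repeatedly hypothesizes a translation from one randomly chosen
    match and keeps the hypothesis supported by the most matches;
    matches that disagree with the dominant background motion
    (i.e., corners on moving objects) are left out as outliers.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        (sx, sy), (dx, dy) = rng.choice(matches)
        tx, ty = dx - sx, dy - sy
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - tx) <= threshold
                   and abs(m[1][1] - m[0][1] - ty) <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```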
- The translation, rotation, and scaling of one image to align it with the second image can then be performed. A translation is a pixel motion in the x or y direction by some number of pixels. Positive translations are in the direction of increasing row or column index; negative ones are the opposite. A translation in the positive direction adds rows or columns to the top or left of the image until the required increase has been achieved. Image rotation is performed relative to an origin, defined to be at the center of the rotation, and is specified as an angle. Scaling an image means making it larger or smaller by a specified factor. The following approximation of equations (7a) and (7b) is used to represent such translation, rotation and scaling:
- x′ = s(x cos α − y sin α) + tx
- y′ = s(x sin α + y cos α) + ty (8)
- wherein
- s is the scaling (zooming) factor.
- α is the angle of rotation about the origin;
- tx is the translation in the x direction; and
- ty is the translation in the y direction.
- By introducing new independent variables a1 = s cos α and a2 = s sin α, equation (8) becomes:
- x′ = a1x − a2y + tx
- y′ = a2x + a1y + ty
- After determining a1, a2, tx and ty, the two images, I1 and I2, can be aligned and the determination of the velocity and direction of the target object motion can be completed.
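Solving for a1, a2, tx and ty from a set of matched background points is a linear least-squares problem; a sketch (assuming NumPy; function and variable names are illustrative):

```python
import numpy as np

def fit_alignment(src, dst):
    """Least-squares fit of x' = a1*x - a2*y + tx, y' = a2*x + a1*y + ty.

    Each match contributes two linear equations in the unknowns
    (a1, a2, tx, ty). Also returns the recovered scale s and rotation
    angle alpha, using a1 = s*cos(alpha) and a2 = s*sin(alpha).
    """
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(xp)
        rows.append([y, x, 0.0, 1.0]); rhs.append(yp)
    a1, a2, tx, ty = np.linalg.lstsq(np.array(rows), np.array(rhs),
                                     rcond=None)[0]
    return a1, a2, tx, ty, np.hypot(a1, a2), np.arctan2(a2, a1)
```

Two or more non-coincident matches determine the four unknowns; using many matches averages out corner-localization noise.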
- To create smooth
camera motion, camera 22 is controlled in a manner which allows camera 22 to be constantly in motion. If the POI is to the left of the center of the field of view, processor 60 communicates a command to camera 22 which instructs camera 22 to pan left at a particular panning velocity or rate of adjustment. The panning velocity is determined by the distance the POI is from the center of the image. There is a linear relationship between the selected panning velocity and the distance between the center of the most recently acquired image and the POI in the horizontal or x direction. Similarly, the tilting rate and direction of camera 22 are determined by the vertical distance, i.e., in the y direction, between the POI and the center of the most recently acquired image. Proportionality factors are also applied to account for the distance of the target object from the camera. - The distance of the target object from the camera also influences the desired panning velocity. For a target object moving at a given speed in the x direction, the panning angle will have to be adjusted at a slower rate to track the object the more distant the object is from the camera. The distance of the target object from the camera also impacts the desired value of the camera tilt and focal length. Assuming a common height for all target objects and that the target objects are moving on a planar surface which is parallel to the panning plane, the tilt angle which places the target object in the center of the image will be determined by the distance of that object from the camera. Similarly, to maintain the target object at a given image height, and assuming all target objects are the same height, the desired focal length of the camera will be determined by the distance of the target object from the camera.
- In the exemplary embodiment, the panning and tilting velocity of
camera 22 are determined by the following equations: - Xvel = (xdelta/xhigh)*sin(tilt angle)
- Yvel = (ydelta/yhigh)*sin(tilt angle)
- wherein:
- Xvel is the velocity or rate at which the panning angle is adjusted;
- Yvel is the velocity or rate at which the tilting angle is adjusted;
- xdelta is the distance between the POI and the center of the image in the x direction;
- ydelta is the distance between the POI and the center of the image in the y direction;
- xhigh and yhigh are normalization factors; and
- sin(tilt angle) is the sine of the camera tilt angle (measured with reference to a horizontal plane) and provides a proportionality factor which is used to account for the distance of the target object from the camera.
- The resulting values Xvel and Yvel are computed using the distance of the POI from the center of the image and the distance of the target object from the camera. As described above, the distance of the POI from the center of the image reflects the movement of the target object over a constant time interval, so Xvel and Yvel are functions of several properties of the target object: its position relative to the camera in the real world and the position of the target object centroid within the FOV, which itself depends on the velocity and acceleration of the target object. Thus, Xvel and Yvel are also functions of the velocity and acceleration of the target object.
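A direct transcription of the two rate equations (names are illustrative; angles in radians):

```python
import math

def pan_tilt_rates(x_delta, y_delta, x_high, y_high, tilt_angle_rad):
    """Pan (Xvel) and tilt (Yvel) adjustment rates.

    The sign of each rate gives the adjustment direction; sin(tilt)
    scales both rates down as the target gets farther from the camera,
    since smaller tilt angles correspond to more distant targets.
    """
    x_vel = (x_delta / x_high) * math.sin(tilt_angle_rad)
    y_vel = (y_delta / y_high) * math.sin(tilt_angle_rad)
    return x_vel, y_vel
```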
- A proportionality factor which is a function of the distance of the target object from the camera is used to adjust the selected panning and tilting adjustment rates because this distance impacts the effects of the panning and tilting adjustment of the camera. With regard to the panning motion of the camera, for example, when the target object is distant from the camera only minimal panning movement will be required to track movement of the target object in the x direction and maintain the target in the center of the image. If the target object is closer to the camera, the camera must pan more quickly to track an object moving at the same speed in the x direction. Similarly, a higher rate of tilting is required to track targets which are closer to the camera than those which are more distant when such targets are moving at the same speed.
- Additionally, the focal length adjustment rate and direction, i.e., how quickly to zoom
camera 22 and whether to zoom in or out, is determined using the distance of the target object from the camera. The process described above for aligning two images having different scales, i.e., acquired at different focal lengths, allows system 20 to utilize dynamic zooming, i.e., adjusting the focal length of camera 22 during the tracking of the target object instead of requiring the camera to maintain a constant zoom or focal length value during tracking or for acquiring compared images. In the exemplary embodiment, the largest detected moving object is selected as the target object provided that the size of the target object is larger than a predetermined threshold value, e.g., 10% of the field of view. Once tracking of the target object begins, the focal length of camera 22 is adjusted in a manner which attempts to maintain the target object between 10% and 70% of the FOV. Tracking of the target may stop if the size of the object falls outside of this range. The focal length of camera 22 is adjusted to account for the distance of the target object from the camera with the goal of keeping the target object size relatively constant, e.g., 20% of the FOV, which facilitates the observation of the target object. - More specifically, the desired focal length is determined by first estimating the target distance between the target object and the camera as follows:
- Target Distance = Camera Height / sin(tilt angle)
- wherein the tilt angle is determined with reference to a horizontal plane.
Camera 22 is mounted at a known height and this height is input into tracking unit 50 during installation of system 20. Next, the resolution-limited FOV width (R−L FOV width) is calculated:
- R−L FOV width = Number of effective pixels / Number of lines of resolution required to identify an intruder
- wherein:
- Number of effective pixels is 768 (H) for NTSC video images and 752 (H) for PAL video images; and
- Number of lines of resolution required to identify an intruder is expressed in lines of resolution per foot; in the exemplary embodiment, 16 lines per foot.
- A desired focal length which will provide a sufficient number of lines of resolution to continue tracking of the target object is then calculated:
- Desired Focal Length = Format * Target Distance (ft) / R−L FOV width
- wherein:
- Format is the horizontal width in mm of the CCD (charge-coupled device) used by the camera, e.g., 3.6 mm for
camera 22. In the illustrated embodiment, camera 22 is instructed to adjust its focal length setting by changing the focal length to the desired focal length value. The focal length adjustment of camera 22 is thus a point-to-point adjustment of the focal length. It would be possible in an alternative embodiment, however, for camera 22 to be commanded to move at an adjustment rate selected based upon the difference between the current focal length and the desired focal length, similar to the manner in which the pan and tilt adjustments are made, rather than to simply move to a given zoom setting. Camera 22 would then continue to adjust the focal length at the specified rate (and in the chosen direction, i.e., increasing or decreasing the focal length of the camera) until processor 60 communicated a second command altering the rate or direction of focal length adjustment. Such a second command could be to change the rate of change to 0, which would correspond to a constant focal length value. - In summary, the video content analysis algorithm performs the following functions:
- Tracker Initialization: The tracker is initialized to position the camera and wait for a moving target object to enter the camera FOV.
- Background Subtraction: Images are compared to subtract the background and detect moving target objects.
- Corner Detection and Matching: Corner features in the background are identified and matched to estimate changes in camera position between acquisition of the images.
- Warping: Images are geometrically distorted to align images taken with differing fields of view and detect the moving target object in such images.
- Region Location and Extraction: Locating the target object in each new frame involves locating and extracting the image region corresponding to the target object.
- Point of Interest (POI) Computation: A simplified representation of the target object and its centroid is located within the two dimensional framework of the image.
- Calculate adjustment rates for PTZ camera: Determine pan, tilt and focal length adjustment rates for camera and communicate commands to the camera.
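The desired-focal-length computation described above (target distance from the camera height and tilt angle, then the resolution-limited FOV width) can be sketched as follows, with defaults taken from the exemplary embodiment; names are illustrative:

```python
import math

def desired_focal_length_mm(camera_height_ft, tilt_angle_rad,
                            effective_pixels=768, lines_per_ft=16,
                            format_mm=3.6):
    """Desired focal length for tracking at a given tilt angle.

    Defaults follow the exemplary embodiment: NTSC (768 effective
    horizontal pixels), 16 lines of resolution per foot, and a
    3.6 mm CCD format.
    """
    # Target Distance = Camera Height / sin(tilt angle)
    target_distance_ft = camera_height_ft / math.sin(tilt_angle_rad)
    # Resolution-limited FOV width in feet
    rl_fov_width_ft = effective_pixels / lines_per_ft
    # Desired Focal Length = Format * Target Distance / R-L FOV width
    return format_mm * target_distance_ft / rl_fov_width_ft
```

For example, a camera mounted 12 ft high at a 30° tilt gives a target distance of 24 ft, a resolution-limited FOV width of 48 ft, and a desired focal length of 1.8 mm.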
- FIG. 3 provides a flow chart which graphically illustrates the general logic of the video content analysis algorithm used by
system 20 as described above and which uses the homography matrix approach instead of the rotation matrix approach to identify and track the target object. As shown in FIG. 3, after turning tracking unit 50 on, it is initialized at step 80 by loading a bootloader program from EEPROM 62 and copying the application code from flash memory 64 to SDRAM 66 for execution. Block 82 represents the remaining memory of SDRAM 66 which is available as a ring buffer for storage of video image frames for processing by processor 60. At decision block 84, processor 60 determines if the first flag is true. The first flag is true only when no images from camera 22 have been loaded to SDRAM 66 for analysis by processor 60. Thus, when tracking unit 50 is turned on, the first time decision block 84 is encountered the first flag will be true and processor 60 will proceed to block 86. Block 86 represents the grabbing of two images by processor 60. Processor 60 then proceeds to block 88 where the current tilt values of camera 22 for each of the two images are obtained from the integral controller of camera 22 for later use in calculating the destination focal length. - Next, block 90 represents the taking of subsamples of the two most recently grabbed images. At
block 92, the image difference of the two subsampled images is calculated to determine if any moving objects are present in the images. (If a moving object is found, the intruder tracking functionality of unit 50 is engaged, i.e., ITE Triggering.) If a moving object is present in the images, the centroid of the moving target object is located at block 94. A corner detection method is then used to detect corner features in the subsampled images and generate lists of such corners at block 96. Next, at block 98, the data for images I1 and I2 are swapped. The swapping of image data is done so that when a new image is grabbed and placed in the buffer after completing the calculations called for in steps 100-104, the new image and data associated therewith will overwrite the image and data associated with the older of the two images already present in the buffer. At block 100 the POI is calculated using the highest resolution images if the POI was determined using subsampled images at block 94. The destination or desired focal length is then calculated at block 102. The pan and tilt velocities, Xvel and Yvel, are calculated at block 104. Next, at block 106, processor 60 communicates a command to camera 22 to adjust the focal length to the desired focal length; to pan at an adjustment rate and direction corresponding to the magnitude and sign of Xvel; and to tilt at an adjustment rate and direction corresponding to the magnitude and sign of Yvel. - The process then returns to block 84 where the first flag will no longer be true and the process will proceed to block 108 where a single new image will be grabbed and overwrite image I2 in the buffer. The tilt value of
camera 22 for new image I2 is then obtained at block 110 from the integral controller of camera 22 for later calculation of the desired focal length. The new image is then subsampled at block 112 and corners are detected and a list of such corners created for the subsampled images at block 114. The warping and alignment process described above is then performed at block 116 to align images I1 and I2. At block 118, the image difference of the two aligned images is then calculated to determine if a moving object is included in the images. If a moving target object is present in the images, the centroid of the target object is determined at block 120. At block 122 images I1 and I2 and the data associated therewith are swapped as described above with respect to block 98. At block 124 the size of the detected target object, i.e., the Blob_Size, is compared to a threshold value and, if the target object is not large enough, or if no target object has been found in the images, the process returns to block 84. If the target object is larger than the threshold size, the process continues on to blocks 100 through 106 where the adjustment parameters of camera 22 are determined and then communicated to camera 22 as described above. - In the illustrated embodiment,
camera 22 may pan and tilt at different specified velocities, i.e., at selectively variable adjustment rates, and when processor 60 communicates a command to camera 22, processor 60 instructs camera 22 to pan in a selected direction and at a selected rate, to tilt in a selected direction and at a selected rate, and to change the focal length to a desired focal length. After receiving this first command, camera 22 will adjust by moving to the specified focal length and panning and tilting in the specified directions and at the specified rates until camera 22 receives a second command instructing it to pan in a new selected direction and at a new selected rate, to tilt in a new selected direction and at a new selected rate, and to change the focal length to a new desired focal length. The panning and tilting of camera 22 may also cease prior to receiving the second command if camera 22 has a limited panning or tilting range and reaches the limit of its panning or tilting range. By instructing camera 22 to pan and tilt in selected directions and at selected rates instead of instructing camera 22 to move to new pan and tilt orientations and then stop, camera 22 may be continuously adjusted during the tracking of the target object without stationary intervals separating the receipt and execution of the adjustment commands and thereby provide a stream of video images with relatively smooth transitional movements. - Thus, during operation of
system 20, processor 60 may consecutively analyze a series of images which may all record different FOVs. As processor 60 analyzes images and repeatedly adjusts camera 22 to track the target object, the series of images may include three images consecutively analyzed by processor 60, i.e., first, second and third images, wherein each image records a different FOV. Processor 60 will have communicated a previous command to camera 22 based upon earlier images, and camera 22 will be adjusted in accordance with this first command as processor 60 analyzes the first and second images; the analysis of the first and second images will result in a second command to camera 22, and camera 22 will be adjusted in accordance with this second command as processor 60 analyzes the second and third images to formulate the next adjustment command for camera 22. As described above, camera 22 will continue to pan and tilt in accordance with the first command until receipt of the second command. In this manner, camera 22 may be continuously adjusted as it acquires a series of images having different fields of view without requiring stationary intervals for the acquisition of images having common FOVs or separating the execution of adjustment commands. - The video content analysis algorithm described above assumes that
camera 22 is mounted at a known height and works best when the surveillance area and target objects conform to several characteristics. For best results, the target should be 30% to 70% of the image height, have a height to width ratio of no more than 5:1 and move less than 25% of the image width between processed frames at a constant velocity. System 20 tracks only one moving target at a time. If multiple targets are within the FOV, system 20 will select the largest target if it is 20% larger than the next largest target. If the largest target is not at least 20% larger than the next largest target, system 20 may change targets randomly. Alternative target object identification methods may also be used to distinguish between moving objects, such as those analyzing the color histogram of the target object. It is best if the area of interest is within 1 standard deviation of the mean intensity of the surrounding environment. Best results are also obtained when the plane of the target motion is parallel to the panning plane. System 20 uses background features to detect “corners” and register subsequent images; therefore it may fail in excessively featureless environments or if targets occupy a majority of the FOV and obscure such corner features. Divergence from these assumptions and characteristics is not necessarily fatal to the operation of system 20 and may merely degrade performance of system 20. These assumptions concerning the illustrated embodiment cover a large subset of video surveillance applications related to restricted areas where people are not supposed to be present. It is also possible for those having ordinary skill in the art to adapt illustrated system 20 to cover additional situations which are not necessarily limited to these assumptions and characteristics. - As shown in FIG. 4, tracking
unit 50 has three main states: 1) Tracker OFF, 2) Looking for Target and 3) Tracking Target. Tracking unit 50 is turned on and off by a human operator inputting commands through an input device such as keyboard 34 or joystick 36. The on/off commands are routed through bi-phase cable 46 to camera 22 and an RS-232 line to tracking unit 50. Tracking unit 50 communicates its current status with LED indicators 70 and 72 and relay 74. For example, LED 70 emits light when unit 50 is on and flashes when unit 50 is tracking a target object. When unit 50 is tracking a target object, relay 74 communicates this information to head end unit 32 via relay line 49. LED 72 emits light when unit 50 is turned on but has experienced an error such as the loss of the video signal. - In the exemplary embodiment, if tracking
unit 50 is on, either looking for a target or tracking a target, and a higher priority activity is initiated, tracking unit 50 will turn off or become inactive. After the higher priority activity has ceased and a dwell time has elapsed, i.e., the higher priority activity has timed out, tracking unit 50 will turn back on and begin looking for a target. The tracking unit activities, their priority rankings and the resulting actions are:
- Joystick Movement (priority 1): Tracker changes to OFF status
- Camera Initiated Movement (priority 2): Tracker changes to OFF status
- Timing Out of Camera Initiated Movement (priority 3): Tracker changes to Looking for Target status
- Timing Out of Joystick Movement (priority 3): Tracker changes to Looking for Target status
- On Command from Head End Unit (priority 4): Tracker changes to Looking for Target status
- Off Command from Head End Unit (priority 4): Tracker changes to OFF status
- In alternative embodiments, the tracking unit may give up control of
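The priority rankings and resulting tracker actions above can be encoded directly; the dictionary keys and state names here are illustrative abbreviations:

```python
# Priority ranking and resulting tracker state for each activity.
PRIORITY_ACTIONS = {
    "joystick movement": (1, "OFF"),
    "camera initiated movement": (2, "OFF"),
    "timing out of camera initiated movement": (3, "Looking for Target"),
    "timing out of joystick movement": (3, "Looking for Target"),
    "on command from head end unit": (4, "Looking for Target"),
    "off command from head end unit": (4, "OFF"),
}

def tracker_action(activity):
    """Return the tracker state that results from the given activity."""
    priority, new_state = PRIORITY_ACTIONS[activity.lower()]
    return new_state
```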
camera 22 during human operator and/or camera initiated movement of the camera and continue to analyze the images acquired by camera 22 to detect target objects. The continued detection of target objects while the camera is under the control of an operator or separate controller is possible because the tracking unit 50 does not require the images used to detect the target object to be acquired while the camera is stationary or for the images to each have the same field of view. - Once tracking
unit 50 has detected a target object, it will continuously track the target object until it can no longer locate the target object; for example, the target object may leave the area which is viewable by camera 22 or may be temporarily obscured by other objects in the FOV. When unit 50 first loses the target object it will enter into a reacquisition subroutine. If the target object is reacquired, tracking unit 50 will continue tracking the target object; if the target has not been found before the completion of the reacquisition subroutine, tracking unit 50 will change its status to Looking for Target and control of the camera position will be returned to either the camera controller or the human operator. The reacquisition subroutine is graphically illustrated by the flow chart of FIG. 5. In the reacquire mode, tracking unit 50 first keeps the camera at the last position in which the target was tracked for approximately 10 seconds. If the target is not reacquired, the camera is zoomed out in discrete increments wherein the maximum zoom in capability of the camera corresponds to 100% and no zoom (i.e., no magnifying effect) corresponds to 0%. More specifically, the camera is zoomed out to the next lowest increment of 20% and looks for the target for approximately 10 seconds in this new FOV. The camera continues to zoom out in 20% increments at 10 second intervals until the target is reacquired or the camera reaches its minimum zoom (0%) setting. After 10 seconds at the minimum zoom setting, if the target has not been reacquired, the status of tracking unit 50 is changed to “Looking for Target”, the position of camera 22 returns to a predefined position or “tour” and the positional control of the camera is returned to the operator or the controller embedded within camera 22. - As described above,
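The zoom-out portion of the reacquisition subroutine can be sketched as a schedule of zoom settings, each held for approximately 10 seconds; including the starting setting as the first dwell is an implementation choice here, and the function name is illustrative:

```python
def reacquisition_zoom_schedule(current_zoom_pct):
    """Zoom settings visited while trying to reacquire a lost target.

    The camera first dwells at its current zoom setting, then steps
    down to the next lowest 20% increment at each interval until it
    reaches the minimum zoom (0%) setting.
    """
    schedule = [current_zoom_pct]
    # Next lowest multiple of 20 strictly below the current setting.
    level = ((int(current_zoom_pct) - 1) // 20) * 20 if current_zoom_pct > 0 else 0
    while level >= 0 and level < schedule[-1]:
        schedule.append(level)
        level -= 20
    return schedule
```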
system 20 uses a general-purpose video processing platform that obtains video and camera control information from a standard PTZ camera. This configuration and use of a standard PTZ camera also allow existing installations with installed PTZ cameras to be retrofitted and upgraded by installing tracking units 50 and coupling them with the existing PTZ cameras. A system which could be upgraded by the addition of one or more tracking units 50 is discussed by Sergeant et al. in U.S. Pat. No. 5,517,236, which is hereby incorporated herein by reference. Providing tracking units 50 with a sheet metal housing facilitates mounting them on or near a PTZ camera to provide PTZ control using image processing of the source video. System 20 thereby provides a stand-alone embedded platform which does not require a personal computer-based tracking system. - The present invention can be used in many environments where it is desirable to have video surveillance capabilities. For example,
system 20 may be used to monitor manufacturing and warehouse facilities and track individuals who enter restricted areas. Head end unit 32 with display 38 and input devices may be located remote from camera 22, such as in a guard room at another location in the building. Although system 20 includes a method for automatically detecting a target object, manual selection of a target object by a human operator, such as by operation of joystick 36, could also be employed with the present invention. After manual selection of the target object, system 20 would track the target object as described above for target objects identified automatically. - While this invention has been described as having an exemplary design, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles.
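The reacquisition subroutine described above (dwell at the last tracked position for about 10 seconds, then zoom out in 20% increments at 10-second intervals until the target is found or minimum zoom is exhausted) can be sketched as follows. This is a minimal illustration only; the `camera` and `find_target` interfaces are hypothetical and not part of the patent:

```python
def reacquire(camera, find_target, dwell_s=10, zoom_step=20):
    """Sketch of the reacquisition subroutine: dwell at the last tracked
    position, then zoom out to successively lower 20% increments, dwelling
    ~10 seconds at each level, until the target is found or the minimum
    zoom (0%) has also been tried.

    camera.zoom is the current zoom level in percent (100 = maximum zoom
    in, 0 = no magnification); camera.set_zoom(pct) adjusts it;
    find_target(timeout_s) returns True if the target reappears in time.
    """
    if find_target(dwell_s):  # dwell at the last position where the target was tracked
        return True
    zoom = camera.zoom
    while zoom > 0:
        # Drop to the next lower multiple of zoom_step (e.g. 73% -> 60%).
        zoom = (zoom - 1) // zoom_step * zoom_step
        camera.set_zoom(zoom)
        if find_target(dwell_s):
            return True
    # Target not reacquired: report "Looking for Target" and return the
    # camera to its predefined position or "tour".
    camera.set_status("Looking for Target")
    camera.go_to_tour()
    return False
```

As in the patent, the final 10-second dwell happens at the 0% setting before the status change, since `find_target` is called once more after `set_zoom(0)`.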
Claims (65)
1. A video tracking system comprising:
a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
at least one processor operably coupled to said camera wherein said processor receives video images acquired by said camera and selectively adjusts said camera; said processor programmed to detect a moving target object in said video images and adjust said camera to track said target object, said processor adjusting said camera at a plurality of varied adjustment rates.
2. The video tracking system of claim 1 wherein said processor selects the adjustment rate of said camera as a function of at least one property of the target object.
3. The video tracking system of claim 2 wherein the at least one property of the target object includes a velocity of the target object.
4. The tracking system of claim 2 wherein said processor is programmed to select the adjustment rate of said camera based upon analysis of a first image and a second image wherein said first image is acquired by said camera adjusted to define a first field of view and said second image is acquired by said camera adjusted to define a second field of view.
5. The tracking system of claim 4 wherein said first and second fields of view are partially overlapping and wherein determination of said selected adjustment rate by said processor includes identifying and aligning at least one common feature represented in each of said first and second images.
6. The tracking system of claim 1 wherein said camera has a selectively adjustable focal length and said processor selects the focal length of said camera as a function of the distance of the target object from said camera.
7. The tracking system of claim 1 wherein said camera is adjusted at a first selected adjustment rate until said processor selects a second adjustment rate and communicates said second adjustment rate to said camera.
8. The tracking system of claim 4 wherein said camera defines a third field of view as said camera is being adjusted at said selected adjustment rate and wherein a third image is acquired by said camera when defining said third field of view, said first, second and third images being consecutively analyzed by said processor.
9. The tracking system of claim 1 wherein said camera is selectively adjustable at a variable rate in adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
10. The tracking system of claim 1 wherein selective adjustment of said camera includes selective panning movement of said camera, said panning movement defining an x-axis, selective tilting movement of said camera, said tilting movement defining a y-axis, and selective focal length adjustment of said camera, adjustment of the focal length defining a z-axis, said x, y and z axes oriented mutually perpendicular.
11. The tracking system of claim 10 wherein said processor adjusts said camera at a selected panning rate, said selected panning rate being a function of the velocity of said target object along said x-axis and said processor adjusts said camera at a selected tilting rate, said selected tilting rate being a function of the velocity of said target object along said y-axis.
12. The tracking system of claim 1 further comprising a display device and an input device operably coupled to said system wherein an operator may view said video images on said display device and input commands or data into said system through said input device, said display device and input device being positionable remote from said camera.
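Claims 4 and 5 above describe selecting the adjustment rate from two images acquired with partially overlapping fields of view, by identifying and aligning a common feature present in both. A minimal sketch of that idea, assuming the pixel coordinates of one static background feature and of the target are known in both frames (all names are hypothetical, not from the patent):

```python
def apparent_to_true_motion(feature_prev, feature_curr, target_prev, target_curr):
    """Estimate the target's own image motion across two frames taken with
    different fields of view, by aligning a common static feature.

    All positions are (x, y) pixel coordinates. The shift of the static
    background feature approximates the camera-induced image motion;
    subtracting it from the target's apparent shift leaves the motion
    attributable to the target itself.
    """
    # Camera-induced shift, estimated from the static background feature.
    cam_dx = feature_curr[0] - feature_prev[0]
    cam_dy = feature_curr[1] - feature_prev[1]

    # Apparent shift of the target between the two frames.
    tgt_dx = target_curr[0] - target_prev[0]
    tgt_dy = target_curr[1] - target_prev[1]

    # True target motion = apparent motion minus camera motion.
    return (tgt_dx - cam_dx, tgt_dy - cam_dy)
```

Dividing the returned displacement by the inter-frame interval yields the target's image-plane velocity, from which a pan or tilt rate can then be selected.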
13. A video tracking system comprising:
a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
at least one processor operably coupled to said camera wherein said processor receives video images acquired by said camera and selectively adjusts said camera; said processor programmed to detect a moving target object in said video images and estimate a target value, said target value being a function of a property of said target object, said processor adjusting said camera at a selected adjustment rate, said selected adjustment rate being a function of said target value.
14. The video tracking system of claim 13 wherein said camera is selectively adjustable at a variable rate in adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
15. The tracking system of claim 13 wherein selective adjustment of said camera includes selective panning movement of said camera, said panning movement defining an x-axis, selective tilting movement of said camera, said tilting movement defining a y-axis, and selective focal length adjustment of said camera, adjustment of the focal length defining a z-axis, said x, y and z axes oriented mutually perpendicular.
16. The tracking system of claim 15 wherein said processor adjusts said camera at a selected panning rate, said selected panning rate being a function of the velocity of said target object along said x-axis and said processor adjusts said camera at a selected tilting rate, said selected tilting rate being a function of the velocity of said target object along said y-axis.
17. The tracking system of claim 13 wherein said processor is programmed to estimate said target value based upon a first image and a second image wherein said first image is acquired by said camera adjusted to define a first field of view and said second image is acquired by said camera adjusted to define a second field of view.
18. The tracking system of claim 17 wherein said first and second fields of view are partially overlapping and wherein determination of said selected adjustment rate by said processor includes identifying and aligning at least one common feature represented in each of said first and second images.
19. The tracking system of claim 17 wherein said camera is adjusted at a first selected adjustment rate until said processor selects a second adjustment rate and communicates said second adjustment rate to said camera.
20. The tracking system of claim 19 wherein said camera defines a third field of view as said camera is adjusted at said selected adjustment rate and wherein a third image is acquired by said camera when defining said third field of view, said first, second and third images being consecutively analyzed by said processor.
21. The tracking system of claim 13 wherein said camera has a selectively adjustable focal length and said processor selects the focal length of said camera as a function of the distance of the target object from said camera.
22. The tracking system of claim 13 further comprising a display device and an input device operably coupled to said system wherein an operator may view said video images on said display device and input commands or data into said system through said input device, said display device and input device being positionable remote from said camera.
23. A video tracking system comprising:
a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
at least one processor operably coupled to said camera wherein said processor receives video images acquired by said camera and selectively adjusts said camera; said processor programmed to detect a moving target object in said video images and adjust said camera and track said target object and wherein during tracking of the target object said processor communicates a plurality of commands to said camera, said camera being continuously and variably adjustable in accordance with said commands without an intervening stationary interval.
24. The video tracking system of claim 23 wherein said camera is selectively adjustable at a variable rate in adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
25. The tracking system of claim 23 wherein said commands include a first command adjusting said camera at a selected rate and direction until a second command is received by said camera.
26. The tracking system of claim 25 wherein said processor adjusts said camera at a selectively variable panning rate and at a selectively variable tilting rate.
27. The tracking system of claim 23 wherein said camera acquires images for analysis by said processor while being adjusted.
28. The tracking system of claim 23 wherein continuous and variable adjustment of said camera includes varying one of a direction of adjustment and a rate of adjustment.
29. A video tracking system comprising:
a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
at least one processor operably coupled to said camera wherein said processor receives video images acquired by said camera and selectively adjusts said camera; said processor programmed to detect a moving target object in said video images and adjust said camera and track said target object wherein said processor consecutively analyzes first, second and third images acquired by said camera, each of said images recording a different field of view, said processor communicating to said camera a first command selectively adjusting said camera and a second command selectively adjusting said camera; said camera being adjusted in accordance with said first command during at least a portion of a first time interval between acquisition of said first and second images, said camera being adjusted in accordance with said second command during at least a portion of a second time interval between acquisition of said second and third images and wherein said camera is continuously adjusted between acquisition of said first image and said third image.
30. The video tracking system of claim 29 wherein said camera is selectively adjustable at a variable rate in adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
31. The tracking system of claim 29 wherein said first command adjusts said camera at a selected rate and direction until said second command is received by said camera.
32. A method of tracking a target object with a video camera, said method comprising:
providing a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
adjusting said camera at a selectively variable adjustment rate to track a target object.
33. The method of claim 32 wherein said camera is adjusted at an adjustment rate which is selected as a function of at least one property of the target object.
34. The method of claim 33 wherein the at least one property of the target object includes a velocity of the target object.
35. The method of claim 33 wherein said adjustment rate is selected based upon analysis of a first image and a second image wherein said first image is acquired by said camera adjusted to define a first field of view and said second image is acquired by said camera adjusted to define a second field of view.
36. The method of claim 35 wherein said first and second fields of view are partially overlapping and wherein determination of said adjustment rate includes identifying and aligning at least one common feature represented in each of said first and second images.
37. The method of claim 35 wherein determination of said adjustment rate includes the use of a proportionality factor which is a function of the real world distance between the target object and said camera.
38. The method of claim 32 wherein said camera is adjusted at a first selected adjustment rate until said processor selects a second adjustment rate and communicates said second adjustment rate to said camera.
39. The method of claim 32 wherein adjusting said camera at a selectively variable adjustment rate comprises adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
40. The method of claim 32 wherein said camera is selectively adjustable at a variable rate in adjusting each of a panning orientation of said camera and a tilt orientation of said camera.
41. The method of claim 32 wherein said camera is selectively adjustable at a variable rate in adjusting each of a panning orientation of said camera and a tilt orientation of said camera, and wherein each of said variable adjustment rates are selected as a function of the velocity of the target object.
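Claims 33 through 41 above recite selecting the pan and tilt adjustment rates as a function of the target's velocity, and claim 37 adds a proportionality factor that depends on the real-world distance between the target and the camera. One way such a selection might look, as a hedged sketch only (the gain and clamping values are illustrative assumptions, not taken from the patent):

```python
def select_adjustment_rates(vx, vy, distance, gain=0.5, max_rate=100.0):
    """Choose pan and tilt rates (as a signed percentage of the camera's
    maximum slew rate) from the target's image-plane velocity.

    vx, vy: target velocity in pixels/second along the x- (pan) and
    y- (tilt) axes. distance: estimated real-world distance to the
    target; a distant target moves fewer pixels per second for the same
    physical speed, so the rate is scaled up proportionally.
    """
    def clamp(rate):
        return max(-max_rate, min(max_rate, rate))

    factor = gain * distance  # distance-dependent proportionality factor
    return clamp(vx * factor), clamp(vy * factor)
```

The clamp keeps the commanded rate within the camera's mechanical limits; the sign of each rate carries the direction of adjustment.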
42. A method of tracking a target object with a video camera, said method comprising:
providing a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera;
detecting a target object in images acquired by said camera;
estimating a target value which is a function of at least one property of the target object; and
adjusting said camera at a selectively variable rate wherein said adjustment rate of said camera is selected as a function of said target value.
43. The method of claim 42 wherein the at least one property of the target object includes a velocity of the target object.
44. The method of claim 42 wherein adjusting said camera at a selectively variable adjustment rate includes selecting said adjustment rate based upon analysis of a first image and a second image wherein said first image is acquired by said camera adjusted to define a first field of view and said second image is acquired by said camera adjusted to define a second field of view.
45. The method of claim 44 wherein said first and second fields of view are partially overlapping and wherein determination of said adjustment rate includes identifying and aligning at least one common feature represented in each of said first and second images.
46. The method of claim 42 wherein the camera has a selectively adjustable focal length and the method further comprises adjusting the focal length of said camera as a function of the distance of the target object from the camera.
47. The method of claim 42 wherein adjusting the camera further comprises adjusting the camera at a first selected adjustment rate until a second selected adjustment rate is communicated to the camera.
48. The method of claim 42 wherein adjusting said camera at a selectively variable adjustment rate comprises adjusting at least one of a panning orientation of said camera and a tilt orientation of said camera.
49. The method of claim 42 wherein adjusting said camera at a selectively variable adjustment rate includes selectively adjusting at a variable rate each of a panning orientation of said camera and a tilt orientation of said camera.
50. The method of claim 42 wherein the step of adjusting said camera includes selecting a first adjustment rate and direction for adjusting the camera and continuing to adjust the camera at the first adjustment rate and direction until a second adjustment rate and direction are selected.
51. A method of tracking a target object with a video camera, said method comprising:
providing a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera; and
adjusting said camera to track a target object wherein said adjustment of said camera includes selectively and variably adjusting at least one adjustment parameter and wherein said camera is continuously adjustable during said selective and variable adjustment of said at least one adjustment parameter.
52. The method of claim 51 wherein selectively and variably adjusting said at least one adjustment parameter of said camera includes the adjustment of at least one of a panning orientation of said camera and a tilt orientation of said camera.
53. The method of claim 51 wherein selectively and variably adjusting said at least one adjustment parameter of said camera includes adjusting said camera at a selectively variable rate in the adjustment of at least one of a panning orientation of said camera and a tilt orientation of said camera.
54. The method of claim 51 wherein selectively and variably adjusting said at least one adjustment parameter of said camera includes adjusting said camera at a selectively variable rate in the adjustment of each of a panning orientation of said camera and a tilt orientation of said camera.
55. The method of claim 54 wherein said selective and variable adjustment of said at least one adjustment parameter includes varying one of a direction of adjustment and a rate of adjustment.
56. The method of claim 54 wherein at least one of the adjustment parameters is adjusted at a variable adjustment rate selected as a function of the velocity of the target object.
57. A method of tracking a target object with a video camera, said method comprising:
providing a video camera having a field of view, said camera being selectively adjustable wherein adjustment of said camera varies the field of view of said camera;
detecting a target object in images acquired by said camera;
acquiring first, second and third images, each of said first, second and third images recording a different field of view;
communicating a first command to said camera selectively adjusting said camera;
communicating a second command to said camera selectively adjusting said camera; and
continuously adjusting said camera between acquisition of said first image and acquisition of said third image wherein said camera is adjusted in accordance with said first command during at least a portion of a first time interval between acquisition of said first image and acquisition of said second image and said camera is adjusted in accordance with said second command during at least a portion of a second time interval between acquisition of said second image and acquisition of said third image.
58. The method of claim 57 wherein said first and second commands selectively adjust at least one of a panning orientation of said camera, a tilt orientation of said camera, and a focal length of said camera.
59. The method of claim 57 wherein said first and second commands selectively adjust said camera at a selectively variable adjustment rate in the adjustment of at least one of a panning orientation of said camera and a tilt orientation of said camera.
60. The method of claim 57 wherein said first and second commands select a variable adjustment rate for each of a panning orientation of said camera and a tilt orientation of said camera.
61. The method of claim 60 wherein at least one of the variable adjustment rates is selected as a function of the velocity of the target object.
62. A video tracking system comprising:
a video camera having a selectively adjustable focal length; and
at least one processor operably coupled to said camera wherein said processor receives video images acquired by said camera and selectively adjusts the focal length of said camera; said processor programmed to detect a moving target object in said video images and adjust the focal length of said camera as a function of the distance of the target object from the camera.
63. The video tracking system of claim 62 wherein said camera has a selectively adjustable panning orientation and a selectively adjustable tilting orientation; said processor adjusting said panning orientation and said tilting orientation to maintain the target object centered in the video images and wherein said processor selectively adjusts the focal length of said camera as a function of the tilt angle.
64. A method of automatically tracking a target object with a video camera, said method comprising:
providing a video camera having a selectively adjustable focal length; and
adjusting the focal length of the camera as a function of the distance of the target object from the camera.
65. The method of claim 64 wherein the camera has a selectively adjustable panning orientation and a selectively adjustable tilting orientation and said method further includes adjusting the panning and tilting orientation of the camera to track the target object and selectively adjusting the focal length of the camera as a function of the tilt angle of the camera.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/306,509 US20040100563A1 (en) | 2002-11-27 | 2002-11-27 | Video tracking system and method |
EP03026979.9A EP1427212B9 (en) | 2002-11-27 | 2003-11-26 | Video tracking system and method |
JP2003397008A JP4451122B2 (en) | 2002-11-27 | 2003-11-27 | Video tracking system and method |
JP2007109339A JP5242938B2 (en) | 2002-11-27 | 2007-04-18 | Video tracking system and method |
US13/249,536 US9876993B2 (en) | 2002-11-27 | 2011-09-30 | Video tracking system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/306,509 US20040100563A1 (en) | 2002-11-27 | 2002-11-27 | Video tracking system and method |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/249,536 Continuation US9876993B2 (en) | 2002-11-27 | 2011-09-30 | Video tracking system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20040100563A1 true US20040100563A1 (en) | 2004-05-27 |
Family
ID=32312196
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/306,509 Abandoned US20040100563A1 (en) | 2002-11-27 | 2002-11-27 | Video tracking system and method |
US13/249,536 Expired - Fee Related US9876993B2 (en) | 2002-11-27 | 2011-09-30 | Video tracking system and method |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/249,536 Expired - Fee Related US9876993B2 (en) | 2002-11-27 | 2011-09-30 | Video tracking system and method |
Country Status (3)
Country | Link |
---|---|
US (2) | US20040100563A1 (en) |
EP (1) | EP1427212B9 (en) |
JP (2) | JP4451122B2 (en) |
Cited By (92)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040233282A1 (en) * | 2003-05-22 | 2004-11-25 | Stavely Donald J. | Systems, apparatus, and methods for surveillance of an area |
US20040263646A1 (en) * | 2003-06-24 | 2004-12-30 | Microsoft Corporation | Whiteboard view camera |
US20050041139A1 (en) * | 2003-08-05 | 2005-02-24 | Lowles Robert J. | Method for automatic backlight adjustment |
US20060203098A1 (en) * | 2004-02-19 | 2006-09-14 | Henninger Paul E Iii | Method and apparatus for producing frame accurate position data in a PTZ dome camera with open loop control |
US20070035623A1 (en) * | 2005-07-22 | 2007-02-15 | Cernium Corporation | Directed attention digital video recordation |
US20070146484A1 (en) * | 2005-11-16 | 2007-06-28 | Joshua Horton | Automated video system for context-appropriate object tracking |
US20070154088A1 (en) * | 2005-09-16 | 2007-07-05 | King-Shy Goh | Robust Perceptual Color Identification |
US20070286456A1 (en) * | 2006-06-12 | 2007-12-13 | Honeywell International Inc. | Static camera tracking system |
US20080055413A1 (en) * | 2006-09-01 | 2008-03-06 | Canon Kabushiki Kaisha | Automatic-tracking camera apparatus |
US20080094480A1 (en) * | 2006-10-19 | 2008-04-24 | Robert Bosch Gmbh | Image processing system and method for improving repeatability |
US20080117326A1 (en) * | 2006-11-22 | 2008-05-22 | Canon Kabushiki Kaisha | Optical device, imaging device, control method for optical device, and program |
US20080122958A1 (en) * | 2006-11-29 | 2008-05-29 | Honeywell International Inc. | Method and system for automatically determining the camera field of view in a camera network |
US20080225127A1 (en) * | 2007-03-12 | 2008-09-18 | Samsung Electronics Co., Ltd. | Digital image stabilization method for correcting horizontal inclination distortion and vertical scaling distortion |
US20090154565A1 (en) * | 2007-12-12 | 2009-06-18 | Samsung Electronics Co., Ltd. | Video data compression method, medium, and system |
US20090195654A1 (en) * | 2008-02-06 | 2009-08-06 | Connell Ii Jonathan H | Virtual fence |
US20090207247A1 (en) * | 2008-02-15 | 2009-08-20 | Jeffrey Zampieron | Hybrid remote digital recording and acquisition system |
US20090251537A1 (en) * | 2008-04-02 | 2009-10-08 | David Keidar | Object content navigation |
WO2009122416A2 (en) * | 2008-04-02 | 2009-10-08 | Evt Technologies Ltd. | Object content navigation |
US20090251539A1 (en) * | 2008-04-04 | 2009-10-08 | Canon Kabushiki Kaisha | Monitoring device |
US20100008539A1 (en) * | 2007-05-07 | 2010-01-14 | Johnson Robert A | Systems and methods for improved target tracking for tactical imaging |
US7650058B1 (en) | 2001-11-08 | 2010-01-19 | Cernium Corporation | Object selective video recording |
US20100054525A1 (en) * | 2008-08-27 | 2010-03-04 | Leiguang Gong | System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans |
US20100194882A1 (en) * | 2009-01-30 | 2010-08-05 | Ajit Belsarkar | Method and apparatus for monitoring using a movable video device |
US20100322476A1 (en) * | 2007-12-13 | 2010-12-23 | Neeraj Krantiveer Kanhere | Vision based real time traffic monitoring |
US7956890B2 (en) | 2004-09-17 | 2011-06-07 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US20110149072A1 (en) * | 2009-12-22 | 2011-06-23 | Mccormack Kenneth | Surveillance system and method for operating same |
US20110175999A1 (en) * | 2010-01-15 | 2011-07-21 | Mccormack Kenneth | Video system and method for operating same |
US20110205355A1 (en) * | 2010-02-19 | 2011-08-25 | Panasonic Corporation | Data Mining Method and System For Estimating Relative 3D Velocity and Acceleration Projection Functions Based on 2D Motions |
US20120019664A1 (en) * | 2010-07-26 | 2012-01-26 | Canon Kabushiki Kaisha | Control apparatus for auto-tracking camera system and auto-tracking camera system equipped with same |
US20120069179A1 (en) * | 2010-09-17 | 2012-03-22 | Gish Kurt A | Apparatus and method for assessing visual acuity |
US20120120248A1 (en) * | 2010-11-16 | 2012-05-17 | Electronics And Telecommunications Research Institute | Image photographing device and security management device of object tracking system and object tracking method |
US20120121134A1 (en) * | 2009-07-29 | 2012-05-17 | Sony Corporation | Control apparatus, control method, and program |
US20120268608A1 (en) * | 2011-04-20 | 2012-10-25 | Canon Kabushiki Kaisha | Automatic tracking control apparatus for camera apparatus and automatic tracking camera system having same |
US20130155182A1 (en) * | 2011-12-20 | 2013-06-20 | Motorola Solutions, Inc. | Methods and apparatus to compensate for overshoot of a desired field of vision by a remotely-controlled image capture device |
US20130259440A2 (en) * | 2005-06-23 | 2013-10-03 | Israel Aerospace Industries Ltd. | A system and method for tracking moving objects |
US8600157B2 (en) | 2010-08-13 | 2013-12-03 | Institute For Information Industry | Method, system and computer program product for object color correction |
US20140009624A1 (en) * | 2009-06-24 | 2014-01-09 | Sony Corporation | Movable-mechanical-section controlling device, method of controlling movable mechanical section, and program |
US20140015957A1 (en) * | 2011-03-15 | 2014-01-16 | Omron Corporation | Image processing device and image processing program |
US20140085545A1 (en) * | 2012-09-26 | 2014-03-27 | General Electric Company | System and method for detection and tracking of moving objects |
US20140232818A1 (en) * | 2013-02-19 | 2014-08-21 | Disney Enterprises, Inc. | Method and device for spherical resampling for video generation |
CN104008371A (en) * | 2014-05-22 | 2014-08-27 | 南京邮电大学 | Regional suspicious target tracking and recognizing method based on multiple cameras |
US20140267643A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Systems and methods for automatic control of a continuous action |
US8842202B2 (en) * | 2012-10-10 | 2014-09-23 | Nec Casio Mobile Communications Ltd. | Mobile terminal, method for adjusting magnification of camera and program |
US20150078618A1 (en) * | 2013-09-17 | 2015-03-19 | Electronics And Telecommunications Research Institute | System for tracking dangerous situation in cooperation with mobile device and method thereof |
US20150131858A1 (en) * | 2013-11-13 | 2015-05-14 | Fujitsu Limited | Tracking device and tracking method |
US20150229841A1 (en) * | 2012-09-18 | 2015-08-13 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target tracking method and system for intelligent tracking high speed dome camera |
US9215467B2 (en) | 2008-11-17 | 2015-12-15 | Checkvideo Llc | Analytics-modulated coding of surveillance video |
US20160086462A1 (en) * | 2014-09-18 | 2016-03-24 | Honeywell International Inc. | Virtual Panoramic Thumbnail to Summarize and Visualize Video Content in Video Surveillance and in Connected Home Business |
US9319635B2 (en) | 2012-11-30 | 2016-04-19 | Pelco, Inc. | Window blanking for pan/tilt/zoom camera |
US9497388B2 (en) | 2010-12-17 | 2016-11-15 | Pelco, Inc. | Zooming factor computation |
CN106161882A (en) * | 2014-08-14 | 2016-11-23 | 韩华泰科株式会社 | Dome-type camera device |
US9544496B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation |
US9544563B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation system |
US20170155805A1 (en) * | 2015-11-26 | 2017-06-01 | Samsung Electronics Co., Ltd. | Method and apparatus for capturing an image of an object by tracking the object |
US20170251169A1 (en) * | 2014-06-03 | 2017-08-31 | Gopro, Inc. | Apparatus and methods for context based video data compression |
CN107147841A (en) * | 2017-04-25 | 2017-09-08 | 北京小鸟看看科技有限公司 | A kind of binocular camera method of adjustment, device and system |
US20170322551A1 (en) * | 2014-07-30 | 2017-11-09 | SZ DJI Technology Co., Ltd | Systems and methods for target tracking |
US10021286B2 (en) | 2011-11-14 | 2018-07-10 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
US10166675B2 (en) | 2014-03-13 | 2019-01-01 | Brain Corporation | Trainable modular robotic apparatus |
US10218899B2 (en) | 2013-09-19 | 2019-02-26 | Canon Kabushiki Kaisha | Control method in image capture system, control apparatus and a non-transitory computer-readable storage medium |
US10325339B2 (en) * | 2016-04-26 | 2019-06-18 | Qualcomm Incorporated | Method and device for capturing image of traffic sign |
US10338460B2 (en) * | 2016-05-24 | 2019-07-02 | Compal Electronics, Inc. | Projection apparatus |
US10391628B2 (en) | 2014-03-13 | 2019-08-27 | Brain Corporation | Trainable modular robotic apparatus and methods |
US10445885B1 (en) | 2015-10-01 | 2019-10-15 | Intellivision Technologies Corp | Methods and systems for tracking objects in videos and images using a cost matrix |
US10462347B2 (en) | 2011-11-14 | 2019-10-29 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
CN110413166A (en) * | 2019-07-02 | 2019-11-05 | 上海熙菱信息技术有限公司 | A kind of method of history video real time linear tracking |
CN110823218A (en) * | 2018-08-10 | 2020-02-21 | 极光飞行科学公司 | Object tracking system |
CN111034171A (en) * | 2017-09-26 | 2020-04-17 | 索尼半导体解决方案公司 | Information processing system |
US20200145585A1 (en) * | 2018-11-01 | 2020-05-07 | Hanwha Techwin Co., Ltd. | Video capturing device including cameras and video capturing system including the same |
US10807230B2 (en) | 2015-06-24 | 2020-10-20 | Brain Corporation | Bistatic object detection apparatus and methods |
CN112804940A (en) * | 2018-10-04 | 2021-05-14 | 伯恩森斯韦伯斯特(以色列)有限责任公司 | ENT tool using camera |
US11095825B1 (en) * | 2020-06-02 | 2021-08-17 | Vitalchat, Inc. | Camera pan, tilt, and zoom history |
CN113315902A (en) * | 2020-02-26 | 2021-08-27 | 深圳英飞拓科技股份有限公司 | Lens stretching control signal hedging optimization method under high-speed dome camera |
CN113691777A (en) * | 2021-08-18 | 2021-11-23 | 浙江大华技术股份有限公司 | Zoom tracking method and device for ball machine, storage medium and electronic device |
WO2022001407A1 (en) * | 2020-07-01 | 2022-01-06 | 海信视像科技股份有限公司 | Camera control method and display device |
US11228737B2 (en) * | 2019-07-31 | 2022-01-18 | Ricoh Company, Ltd. | Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium |
US11269331B2 (en) | 2018-07-20 | 2022-03-08 | May Mobility, Inc. | Multi-perspective system and method for behavioral policy selection by an autonomous agent |
US11321985B2 (en) * | 2017-10-10 | 2022-05-03 | Lumileds Llc | Counterfeit detection in bright environment |
US11352023B2 (en) | 2020-07-01 | 2022-06-07 | May Mobility, Inc. | Method and system for dynamically curating autonomous vehicle policies |
US11396302B2 (en) | 2020-12-14 | 2022-07-26 | May Mobility, Inc. | Autonomous vehicle safety platform system and method |
US20220321756A1 (en) * | 2021-02-26 | 2022-10-06 | Hill-Rom Services, Inc. | Patient monitoring system |
US11472436B1 (en) | 2021-04-02 | 2022-10-18 | May Mobility, Inc | Method and system for operating an autonomous agent with incomplete environmental information |
US11472444B2 (en) | 2020-12-17 | 2022-10-18 | May Mobility, Inc. | Method and system for dynamically updating an environmental representation of an autonomous agent |
US11513189B2 (en) * | 2019-02-15 | 2022-11-29 | May Mobility, Inc. | Systems and methods for intelligently calibrating infrastructure devices using onboard sensors of an autonomous agent |
US11565717B2 (en) | 2021-06-02 | 2023-01-31 | May Mobility, Inc. | Method and system for remote assistance of an autonomous agent |
US11585920B2 (en) * | 2017-12-28 | 2023-02-21 | Intel Corporation | Vehicle sensor fusion |
US11681896B2 (en) | 2017-03-17 | 2023-06-20 | The Regents Of The University Of Michigan | Method and apparatus for constructing informative outcomes to guide multi-policy decision making |
US11814072B2 (en) | 2022-02-14 | 2023-11-14 | May Mobility, Inc. | Method and system for conditional operation of an autonomous agent |
US11831955B2 (en) | 2010-07-12 | 2023-11-28 | Time Warner Cable Enterprises Llc | Apparatus and methods for content management and account linking across multiple content delivery networks |
US11847913B2 (en) | 2018-07-24 | 2023-12-19 | May Mobility, Inc. | Systems and methods for implementing multimodal safety operations with an autonomous agent |
US11849206B1 (en) * | 2022-02-23 | 2023-12-19 | Amazon Technologies, Inc. | Systems and methods for automated object identification and tracking |
US11956532B2 (en) * | 2021-08-31 | 2024-04-09 | Panasonic Intellectual Property Management Co., Ltd. | Displacement detection method, image-capturing instruction method, displacement detection device, and image-capturing instruction device |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7373012B2 (en) * | 2005-02-01 | 2008-05-13 | Mitsubishi Electric Research Laboratories, Inc. | Detecting moving objects in videos with corner-based background model |
DE102008001076A1 (en) | 2008-04-09 | 2009-10-15 | Robert Bosch Gmbh | Method, device and computer program for reducing the resolution of an input image |
DE102008042562A1 (en) | 2008-10-02 | 2010-04-08 | Robert Bosch Gmbh | Camera and corresponding method for selecting an object to be recorded |
EP2449760A1 (en) | 2009-06-29 | 2012-05-09 | Bosch Security Systems, Inc. | Omni-directional intelligent autotour and situational aware dome surveillance camera system and method |
CN102473182B (en) * | 2009-06-30 | 2015-07-22 | 皇家飞利浦电子股份有限公司 | Relevance feedback for content-based image retrieval |
KR101149329B1 (en) * | 2010-06-30 | 2012-05-23 | 아주대학교산학협력단 | Active object tracking device by using monitor camera and method |
WO2014071291A2 (en) | 2012-11-02 | 2014-05-08 | Strongwatch Corporation, Nevada C Corp | Wide area imaging system and method |
US20140133753A1 (en) * | 2012-11-09 | 2014-05-15 | Ge Aviation Systems Llc | Spectral scene simplification through background subtraction |
TWI562642B (en) * | 2015-06-24 | 2016-12-11 | Vivotek Inc | Image surveillance method and image surveillance device thereof |
GB2545900B (en) * | 2015-12-21 | 2020-08-12 | Canon Kk | Method, device, and computer program for re-identification of objects in images obtained from a plurality of cameras |
JP6943183B2 (en) * | 2018-01-05 | 2021-09-29 | オムロン株式会社 | Positioning device, position identification method, position identification program and camera device |
US20200090501A1 (en) * | 2018-09-19 | 2020-03-19 | International Business Machines Corporation | Accident avoidance system for pedestrians |
CN112788227B (en) * | 2019-11-07 | 2022-06-14 | 富泰华工业(深圳)有限公司 | Target tracking shooting method, target tracking shooting device, computer device and storage medium |
CN111060519A (en) * | 2019-12-30 | 2020-04-24 | 研祥智能科技股份有限公司 | LED support defect judgment method and system |
CN111698426B (en) * | 2020-06-23 | 2021-08-03 | 广东小天才科技有限公司 | Test question shooting method and device, electronic equipment and storage medium |
US11412133B1 (en) * | 2020-06-26 | 2022-08-09 | Amazon Technologies, Inc. | Autonomously motile device with computer vision |
US11417013B2 (en) * | 2020-10-13 | 2022-08-16 | Sensormatic Electronics, LLC | Iterative layout mapping via a stationary camera |
JP2022135081A (en) * | 2021-03-04 | 2022-09-15 | キヤノン株式会社 | Imaging apparatus, imaging control device, information processing device, and control method therefor |
US11601599B1 (en) * | 2021-03-22 | 2023-03-07 | Urban Sky Theory Inc. | Aerial image capture system with single axis camera rotation |
Citations (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4403256A (en) * | 1981-11-10 | 1983-09-06 | Cbs Inc. | Television picture stabilizing system |
US4410914A (en) * | 1982-02-01 | 1983-10-18 | Cbs Inc. | Television picture stabilizing system |
US4476494A (en) * | 1981-03-30 | 1984-10-09 | Jean Tugaye | Apparatus for centering and stabilizing the size of a target image |
US4897719A (en) * | 1987-03-19 | 1990-01-30 | Hugh Griffin | Image pre-processing sub-system |
US4945367A (en) * | 1988-03-02 | 1990-07-31 | Blackshear David M | Surveillance camera system |
US4959725A (en) * | 1988-07-13 | 1990-09-25 | Sony Corporation | Method and apparatus for processing an image produced by a video camera to correct for undesired motion of the video camera |
US5012347A (en) * | 1987-01-22 | 1991-04-30 | Antoine Fournier | Image stabilizing apparatus for a portable video camera |
US5237405A (en) * | 1990-05-21 | 1993-08-17 | Matsushita Electric Industrial Co., Ltd. | Image motion vector detecting device and swing correcting device |
US5264933A (en) * | 1991-07-19 | 1993-11-23 | Princeton Electronic Billboard, Inc. | Television displays having selected inserted indicia |
US5353392A (en) * | 1990-04-11 | 1994-10-04 | Multi Media Techniques | Method and device for modifying a zone in successive images |
US5371539A (en) * | 1991-10-18 | 1994-12-06 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5430480A (en) * | 1992-06-30 | 1995-07-04 | Ricoh California Research Center | Sensor driven global motion compensation |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
US5438360A (en) * | 1992-09-08 | 1995-08-01 | Paul Howard Mayeux | Machine vision camera and video reprocessing system |
US5491517A (en) * | 1994-03-14 | 1996-02-13 | Scitex America Corporation | System for implanting an image into a video stream |
US5502482A (en) * | 1992-08-12 | 1996-03-26 | British Broadcasting Corporation | Derivation of studio camera position and motion from the camera image |
US5517236A (en) * | 1994-06-22 | 1996-05-14 | Philips Electronics North America Corporation | Video surveillance system |
US5528319A (en) * | 1993-10-13 | 1996-06-18 | Photran Corporation | Privacy filter for a display device |
US5552823A (en) * | 1992-02-15 | 1996-09-03 | Sony Corporation | Picture processing apparatus with object tracking |
US5563652A (en) * | 1993-06-28 | 1996-10-08 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5608703A (en) * | 1994-12-26 | 1997-03-04 | Canon Kabushiki Kaisha | Image blur prevention apparatus |
US5610653A (en) * | 1992-02-07 | 1997-03-11 | Abecassis; Max | Method and system for automatically tracking a zoomed video image |
US5627616A (en) * | 1994-06-22 | 1997-05-06 | Philips Electronics North America Corporation | Surveillance camera system |
US5629984A (en) * | 1995-03-10 | 1997-05-13 | Sun Microsystems, Inc. | System and method for data security |
US5629988A (en) * | 1993-06-04 | 1997-05-13 | David Sarnoff Research Center, Inc. | System and method for electronic image stabilization |
US5754225A (en) * | 1995-10-05 | 1998-05-19 | Sony Corporation | Video camera system and automatic tracking method therefor |
US5798787A (en) * | 1995-08-11 | 1998-08-25 | Kabushiki Kaisha Toshiba | Method and apparatus for detecting an approaching object within a monitoring zone |
US5798786A (en) * | 1996-05-07 | 1998-08-25 | Recon/Optical, Inc. | Electro-optical imaging detector array for a moving vehicle which includes two axis image motion compensation and transfers pixels in row directions and column directions |
US5801770A (en) * | 1991-07-31 | 1998-09-01 | Sensormatic Electronics Corporation | Surveillance apparatus with enhanced control of camera and lens assembly |
US5835138A (en) * | 1995-08-30 | 1998-11-10 | Sony Corporation | Image signal processing apparatus and recording/reproducing apparatus |
US5909242A (en) * | 1993-06-29 | 1999-06-01 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5953079A (en) * | 1992-03-24 | 1999-09-14 | British Broadcasting Corporation | Machine method for compensating for non-linear picture transformations, E.G. zoom and pan, in a video image motion compensation system |
US5963371A (en) * | 1998-02-04 | 1999-10-05 | Intel Corporation | Method of displaying private data to collocated users |
US5963248A (en) * | 1995-03-22 | 1999-10-05 | Sony Corporation | Automatic tracking/image sensing device |
US5969755A (en) * | 1996-02-05 | 1999-10-19 | Texas Instruments Incorporated | Motion based event detection system and method |
US5973733A (en) * | 1995-05-31 | 1999-10-26 | Texas Instruments Incorporated | Video stabilization system and method |
US5982420A (en) * | 1997-01-21 | 1999-11-09 | The United States Of America As Represented By The Secretary Of The Navy | Autotracking device designating a target |
US6067399A (en) * | 1998-09-02 | 2000-05-23 | Sony Corporation | Privacy mode for acquisition cameras and camcorders |
US6100925A (en) * | 1996-11-27 | 2000-08-08 | Princeton Video Image, Inc. | Image insertion in video streams using a combination of physical sensors and pattern recognition |
US6144405A (en) * | 1994-12-16 | 2000-11-07 | Sanyo Electric Company, Ltd. | Electronic picture stabilizer with movable detection areas and video camera utilizing the same |
US6154317A (en) * | 1996-10-11 | 2000-11-28 | Polytech Ab | Device for stabilizing of a remotely controlled sensor, like a camera |
US6173087B1 (en) * | 1996-11-13 | 2001-01-09 | Sarnoff Corporation | Multi-view image registration with application to mosaicing and lens distortion correction |
US6181345B1 (en) * | 1998-03-06 | 2001-01-30 | Symah Vision | Method and apparatus for replacing target zones in a video sequence |
US6208386B1 (en) * | 1995-09-08 | 2001-03-27 | Orad Hi-Tec Systems Limited | Method and apparatus for automatic electronic replacement of billboards in a video image |
US6208379B1 (en) * | 1996-02-20 | 2001-03-27 | Canon Kabushiki Kaisha | Camera display control and monitoring system |
US6211913B1 (en) * | 1998-03-23 | 2001-04-03 | Sarnoff Corporation | Apparatus and method for removing blank areas from real-time stabilized images by inserting background information |
US6211912B1 (en) * | 1994-02-04 | 2001-04-03 | Lucent Technologies Inc. | Method for detecting camera-motion induced scene changes |
US20010002843A1 (en) * | 1999-12-03 | 2001-06-07 | Kunio Yata | Automatic following device |
US6263088B1 (en) * | 1997-06-19 | 2001-07-17 | Ncr Corporation | System and method for tracking movement of objects in a scene |
US6295367B1 (en) * | 1997-06-19 | 2001-09-25 | Emtera Corporation | System and method for tracking movement of objects in a scene using correspondence graphs |
US20020008758A1 (en) * | 2000-03-10 | 2002-01-24 | Broemmelsiek Raymond M. | Method and apparatus for video surveillance with defined zones |
US20020051057A1 (en) * | 2000-10-26 | 2002-05-02 | Kunio Yata | Following device |
US6396961B1 (en) * | 1997-11-12 | 2002-05-28 | Sarnoff Corporation | Method and apparatus for fixating a camera on a target point using image alignment |
US6424370B1 (en) * | 1999-10-08 | 2002-07-23 | Texas Instruments Incorporated | Motion based event detection system and method |
US6437819B1 (en) * | 1999-06-25 | 2002-08-20 | Rohan Christopher Loveland | Automated video person tracking system |
US6441864B1 (en) * | 1996-11-12 | 2002-08-27 | Sony Corporation | Video signal processing device and method employing transformation matrix to generate composite image |
US6442474B1 (en) * | 2000-12-07 | 2002-08-27 | Koninklijke Philips Electronics N.V. | Vision-based method and apparatus for monitoring vehicular traffic events |
US20020140813A1 (en) * | 2001-03-28 | 2002-10-03 | Koninklijke Philips Electronics N.V. | Method for selecting a target in an automated video tracking system |
US20020140814A1 (en) * | 2001-03-28 | 2002-10-03 | Koninklijke Philips Electronics N.V. | Method for assisting an automated video tracking system in reacquiring a target |
US6478425B2 (en) * | 2000-12-29 | 2002-11-12 | Koninklijke Philips Electronics N.V. | System and method for automatically adjusting a lens power through gaze tracking |
US20020168091A1 (en) * | 2001-05-11 | 2002-11-14 | Miroslav Trajkovic | Motion detection via image alignment |
US20020167537A1 (en) * | 2001-05-11 | 2002-11-14 | Miroslav Trajkovic | Motion-based tracking with pan-tilt-zoom camera |
US6507366B1 (en) * | 1998-04-16 | 2003-01-14 | Samsung Electronics Co., Ltd. | Method and apparatus for automatically tracking a moving object |
US6509926B1 (en) * | 2000-02-17 | 2003-01-21 | Sensormatic Electronics Corporation | Surveillance apparatus for camera surveillance system |
US20030035051A1 (en) * | 2001-08-07 | 2003-02-20 | Samsung Electronics Co., Ltd. | Device for and method of automatically tracking a moving object |
US20030137589A1 (en) * | 2002-01-22 | 2003-07-24 | Kazunori Miyata | Video camera system |
US6628711B1 (en) * | 1999-07-02 | 2003-09-30 | Motorola, Inc. | Method and apparatus for compensating for jitter in a digital video image |
US20030227555A1 (en) * | 2002-06-06 | 2003-12-11 | Hitachi, Ltd. | Surveillance camera apparatus, surveillance camera system apparatus, and image-sensed picture masking method |
US6734901B1 (en) * | 1997-05-20 | 2004-05-11 | Canon Kabushiki Kaisha | Vibration correction apparatus |
US20040130628A1 (en) * | 2003-01-08 | 2004-07-08 | Stavely Donald J. | Apparatus and method for reducing image blur in a digital camera |
US6778210B1 (en) * | 1999-07-15 | 2004-08-17 | Olympus Optical Co., Ltd. | Image pickup apparatus with blur compensation |
US6781622B1 (en) * | 1998-06-26 | 2004-08-24 | Ricoh Company, Ltd. | Apparatus for correction based upon detecting a camera shaking |
US6809760B1 (en) * | 1998-06-12 | 2004-10-26 | Canon Kabushiki Kaisha | Camera control apparatus for controlling a plurality of cameras for tracking an object |
US20050157169A1 (en) * | 2004-01-20 | 2005-07-21 | Tomas Brodsky | Object blocking zones to reduce false alarms in video surveillance systems |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3943561A (en) | 1973-08-06 | 1976-03-09 | Westinghouse Electric Corporation | System for optically detecting moving targets |
JPS62229082A (en) * | 1986-03-31 | 1987-10-07 | Toshiba Corp | Image detector |
JPH0797295B2 (en) | 1990-03-31 | 1995-10-18 | 株式会社島津製作所 | Target tracking system |
JP3182808B2 (en) * | 1991-09-20 | 2001-07-03 | 株式会社日立製作所 | Image processing system |
JPH05346958A (en) * | 1992-06-15 | 1993-12-27 | Matsushita Electric Ind Co Ltd | Mobile object tracking device |
USRE38420E1 (en) | 1992-08-12 | 2004-02-10 | British Broadcasting Corporation | Derivation of studio camera position and motion from the camera image |
SE502679C2 (en) | 1993-05-28 | 1995-12-04 | Saab Scania Combitech Ab | Method and apparatus for recording the movement of a vehicle on a ground |
JP3388833B2 (en) * | 1993-10-19 | 2003-03-24 | 株式会社応用計測研究所 | Measuring device for moving objects |
JPH08123784A (en) | 1994-10-28 | 1996-05-17 | Canon Inc | Method and device for processing data |
GB2305051B (en) | 1995-09-08 | 2000-06-21 | Orad Hi Tec Systems Ltd | Method and apparatus for automatic electronic replacement of billboards in a video image |
JP3487045B2 (en) * | 1995-10-30 | 2004-01-13 | 松下電工株式会社 | Automatic tracking device |
GB2316255B (en) | 1996-08-09 | 2000-05-31 | Roke Manor Research | Improvements in or relating to image stabilisation |
US6727938B1 (en) | 1997-04-14 | 2004-04-27 | Robert Bosch Gmbh | Security system with maskable motion detection and camera with an adjustable field of view |
US6760061B1 (en) * | 1997-04-14 | 2004-07-06 | Nestor Traffic Systems, Inc. | Traffic sensor |
US6208388B1 (en) | 1997-10-18 | 2001-03-27 | Lucent Technologies, Inc. | Channel responsive television input signal interface circuit and method |
JP2000083246A (en) | 1998-09-04 | 2000-03-21 | Canon Inc | Camera control system, camera control method, and recording medium stored with program to execute processing thereof |
JP4600894B2 (en) | 1999-08-20 | 2010-12-22 | パナソニック株式会社 | Video signal processing device |
JP3722653B2 (en) | 1999-08-31 | 2005-11-30 | 松下電器産業株式会社 | Surveillance camera device and display method of surveillance camera |
JP3440916B2 (en) | 2000-03-30 | 2003-08-25 | 日本電気株式会社 | Automatic tracking device, automatic tracking method, and recording medium recording automatic tracking program |
JP3603737B2 (en) | 2000-03-30 | 2004-12-22 | 日本電気株式会社 | Moving object tracking method and device |
JP4725693B2 (en) * | 2000-10-26 | 2011-07-13 | 富士フイルム株式会社 | Automatic tracking device |
US7382400B2 (en) | 2004-02-19 | 2008-06-03 | Robert Bosch Gmbh | Image stabilization system and method for a video camera |
US8212872B2 (en) | 2004-06-02 | 2012-07-03 | Robert Bosch Gmbh | Transformable privacy mask for video camera images |
JP5083712B2 (en) | 2007-12-21 | 2012-11-28 | 井関農機株式会社 | Vegetable seedling transplanter |
Application events
- 2002-11-27 US US10/306,509 patent/US20040100563A1/en not_active Abandoned
- 2003-11-26 EP EP03026979.9A patent/EP1427212B9/en not_active Expired - Lifetime
- 2003-11-27 JP JP2003397008A patent/JP4451122B2/en not_active Expired - Fee Related
- 2007-04-18 JP JP2007109339A patent/JP5242938B2/en not_active Expired - Lifetime
- 2011-09-30 US US13/249,536 patent/US9876993B2/en not_active Expired - Fee Related
Patent Citations (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4476494A (en) * | 1981-03-30 | 1984-10-09 | Jean Tugaye | Apparatus for centering and stabilizing the size of a target image |
US4403256A (en) * | 1981-11-10 | 1983-09-06 | Cbs Inc. | Television picture stabilizing system |
US4410914A (en) * | 1982-02-01 | 1983-10-18 | Cbs Inc. | Television picture stabilizing system |
US5012347A (en) * | 1987-01-22 | 1991-04-30 | Antoine Fournier | Image stabilizing apparatus for a portable video camera |
US4897719A (en) * | 1987-03-19 | 1990-01-30 | Hugh Griffin | Image pre-processing sub-system |
US4945367A (en) * | 1988-03-02 | 1990-07-31 | Blackshear David M | Surveillance camera system |
US4959725A (en) * | 1988-07-13 | 1990-09-25 | Sony Corporation | Method and apparatus for processing an image produced by a video camera to correct for undesired motion of the video camera |
US5353392A (en) * | 1990-04-11 | 1994-10-04 | Multi Media Techniques | Method and device for modifying a zone in successive images |
US5237405A (en) * | 1990-05-21 | 1993-08-17 | Matsushita Electric Industrial Co., Ltd. | Image motion vector detecting device and swing correcting device |
US5264933A (en) * | 1991-07-19 | 1993-11-23 | Princeton Electronic Billboard, Inc. | Television displays having selected inserted indicia |
US5801770A (en) * | 1991-07-31 | 1998-09-01 | Sensormatic Electronics Corporation | Surveillance apparatus with enhanced control of camera and lens assembly |
US5371539A (en) * | 1991-10-18 | 1994-12-06 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5610653A (en) * | 1992-02-07 | 1997-03-11 | Abecassis; Max | Method and system for automatically tracking a zoomed video image |
US5552823A (en) * | 1992-02-15 | 1996-09-03 | Sony Corporation | Picture processing apparatus with object tracking |
US5953079A (en) * | 1992-03-24 | 1999-09-14 | British Broadcasting Corporation | Machine method for compensating for non-linear picture transformations, E.G. zoom and pan, in a video image motion compensation system |
US5430480A (en) * | 1992-06-30 | 1995-07-04 | Ricoh California Research Center | Sensor driven global motion compensation |
US5502482A (en) * | 1992-08-12 | 1996-03-26 | British Broadcasting Corporation | Derivation of studio camera position and motion from the camera image |
US5438360A (en) * | 1992-09-08 | 1995-08-01 | Paul Howard Mayeux | Machine vision camera and video reprocessing system |
US5629988A (en) * | 1993-06-04 | 1997-05-13 | David Sarnoff Research Center, Inc. | System and method for electronic image stabilization |
US5563652A (en) * | 1993-06-28 | 1996-10-08 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5648815A (en) * | 1993-06-28 | 1997-07-15 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5909242A (en) * | 1993-06-29 | 1999-06-01 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer |
US5528319A (en) * | 1993-10-13 | 1996-06-18 | Photran Corporation | Privacy filter for a display device |
US6211912B1 (en) * | 1994-02-04 | 2001-04-03 | Lucent Technologies Inc. | Method for detecting camera-motion induced scene changes |
US5731846A (en) * | 1994-03-14 | 1998-03-24 | Scidel Technologies Ltd. | Method and system for perspectively distorting an image and implanting same into a video stream |
US5491517A (en) * | 1994-03-14 | 1996-02-13 | Scitex America Corporation | System for implanting an image into a video stream |
US5436672A (en) * | 1994-05-27 | 1995-07-25 | Symah Vision | Video processing system for modifying a zone in successive images |
US5517236A (en) * | 1994-06-22 | 1996-05-14 | Philips Electronics North America Corporation | Video surveillance system |
US5627616A (en) * | 1994-06-22 | 1997-05-06 | Philips Electronics North America Corporation | Surveillance camera system |
US6144405A (en) * | 1994-12-16 | 2000-11-07 | Sanyo Electric Company, Ltd. | Electronic picture stabilizer with movable detection areas and video camera utilizing the same |
US5608703A (en) * | 1994-12-26 | 1997-03-04 | Canon Kabushiki Kaisha | Image blur prevention apparatus |
US5629984A (en) * | 1995-03-10 | 1997-05-13 | Sun Microsystems, Inc. | System and method for data security |
US5963248A (en) * | 1995-03-22 | 1999-10-05 | Sony Corporation | Automatic tracking/image sensing device |
US5973733A (en) * | 1995-05-31 | 1999-10-26 | Texas Instruments Incorporated | Video stabilization system and method |
US5798787A (en) * | 1995-08-11 | 1998-08-25 | Kabushiki Kaisha Toshiba | Method and apparatus for detecting an approaching object within a monitoring zone |
US5926212A (en) * | 1995-08-30 | 1999-07-20 | Sony Corporation | Image signal processing apparatus and recording/reproducing apparatus |
US5835138A (en) * | 1995-08-30 | 1998-11-10 | Sony Corporation | Image signal processing apparatus and recording/reproducing apparatus |
US6384871B1 (en) * | 1995-09-08 | 2002-05-07 | Orad Hi-Tec Systems Limited | Method and apparatus for automatic electronic replacement of billboards in a video image |
US6208386B1 (en) * | 1995-09-08 | 2001-03-27 | Orad Hi-Tec Systems Limited | Method and apparatus for automatic electronic replacement of billboards in a video image |
US5754225A (en) * | 1995-10-05 | 1998-05-19 | Sony Corporation | Video camera system and automatic tracking method therefor |
US5969755A (en) * | 1996-02-05 | 1999-10-19 | Texas Instruments Incorporated | Motion based event detection system and method |
US6208379B1 (en) * | 1996-02-20 | 2001-03-27 | Canon Kabushiki Kaisha | Camera display control and monitoring system |
US5798786A (en) * | 1996-05-07 | 1998-08-25 | Recon/Optical, Inc. | Electro-optical imaging detector array for a moving vehicle which includes two axis image motion compensation and transfers pixels in row directions and column directions |
US6154317A (en) * | 1996-10-11 | 2000-11-28 | Polytech Ab | Device for stabilizing of a remotely controlled sensor, like a camera |
US6441864B1 (en) * | 1996-11-12 | 2002-08-27 | Sony Corporation | Video signal processing device and method employing transformation matrix to generate composite image |
US6173087B1 (en) * | 1996-11-13 | 2001-01-09 | Sarnoff Corporation | Multi-view image registration with application to mosaicing and lens distortion correction |
US6100925A (en) * | 1996-11-27 | 2000-08-08 | Princeton Video Image, Inc. | Image insertion in video streams using a combination of physical sensors and pattern recognition |
US5982420A (en) * | 1997-01-21 | 1999-11-09 | The United States Of America As Represented By The Secretary Of The Navy | Autotracking device designating a target |
US6734901B1 (en) * | 1997-05-20 | 2004-05-11 | Canon Kabushiki Kaisha | Vibration correction apparatus |
US6263088B1 (en) * | 1997-06-19 | 2001-07-17 | Ncr Corporation | System and method for tracking movement of objects in a scene |
US6295367B1 (en) * | 1997-06-19 | 2001-09-25 | Emtera Corporation | System and method for tracking movement of objects in a scene using correspondence graphs |
US6396961B1 (en) * | 1997-11-12 | 2002-05-28 | Sarnoff Corporation | Method and apparatus for fixating a camera on a target point using image alignment |
US5963371A (en) * | 1998-02-04 | 1999-10-05 | Intel Corporation | Method of displaying private data to collocated users |
US6181345B1 (en) * | 1998-03-06 | 2001-01-30 | Symah Vision | Method and apparatus for replacing target zones in a video sequence |
US6211913B1 (en) * | 1998-03-23 | 2001-04-03 | Sarnoff Corporation | Apparatus and method for removing blank areas from real-time stabilized images by inserting background information |
US6507366B1 (en) * | 1998-04-16 | 2003-01-14 | Samsung Electronics Co., Ltd. | Method and apparatus for automatically tracking a moving object |
US6809760B1 (en) * | 1998-06-12 | 2004-10-26 | Canon Kabushiki Kaisha | Camera control apparatus for controlling a plurality of cameras for tracking an object |
US6781622B1 (en) * | 1998-06-26 | 2004-08-24 | Ricoh Company, Ltd. | Apparatus for correction based upon detecting a camera shaking |
US6067399A (en) * | 1998-09-02 | 2000-05-23 | Sony Corporation | Privacy mode for acquisition cameras and camcorders |
US6437819B1 (en) * | 1999-06-25 | 2002-08-20 | Rohan Christopher Loveland | Automated video person tracking system |
US6628711B1 (en) * | 1999-07-02 | 2003-09-30 | Motorola, Inc. | Method and apparatus for compensating for jitter in a digital video image |
US6778210B1 (en) * | 1999-07-15 | 2004-08-17 | Olympus Optical Co., Ltd. | Image pickup apparatus with blur compensation |
US6424370B1 (en) * | 1999-10-08 | 2002-07-23 | Texas Instruments Incorporated | Motion based event detection system and method |
US20010002843A1 (en) * | 1999-12-03 | 2001-06-07 | Kunio Yata | Automatic following device |
US6509926B1 (en) * | 2000-02-17 | 2003-01-21 | Sensormatic Electronics Corporation | Surveillance apparatus for camera surveillance system |
US20020008758A1 (en) * | 2000-03-10 | 2002-01-24 | Broemmelsiek Raymond M. | Method and apparatus for video surveillance with defined zones |
US20020030741A1 (en) * | 2000-03-10 | 2002-03-14 | Broemmelsiek Raymond M. | Method and apparatus for object surveillance with a movable camera |
US20020051057A1 (en) * | 2000-10-26 | 2002-05-02 | Kunio Yata | Following device |
US6442474B1 (en) * | 2000-12-07 | 2002-08-27 | Koninklijke Philips Electronics N.V. | Vision-based method and apparatus for monitoring vehicular traffic events |
US6478425B2 (en) * | 2000-12-29 | 2002-11-12 | Koninklijke Philips Electronics N.V. | System and method for automatically adjusting a lens power through gaze tracking |
US20020140814A1 (en) * | 2001-03-28 | 2002-10-03 | Koninklijke Philips Electronics N.V. | Method for assisting an automated video tracking system in reacquiring a target |
US20020140813A1 (en) * | 2001-03-28 | 2002-10-03 | Koninklijke Philips Electronics N.V. | Method for selecting a target in an automated video tracking system |
US20020167537A1 (en) * | 2001-05-11 | 2002-11-14 | Miroslav Trajkovic | Motion-based tracking with pan-tilt-zoom camera |
US20020168091A1 (en) * | 2001-05-11 | 2002-11-14 | Miroslav Trajkovic | Motion detection via image alignment |
US20030035051A1 (en) * | 2001-08-07 | 2003-02-20 | Samsung Electronics Co., Ltd. | Device for and method of automatically tracking a moving object |
US20030137589A1 (en) * | 2002-01-22 | 2003-07-24 | Kazunori Miyata | Video camera system |
US20030227555A1 (en) * | 2002-06-06 | 2003-12-11 | Hitachi, Ltd. | Surveillance camera apparatus, surveillance camera system apparatus, and image-sensed picture masking method |
US20040130628A1 (en) * | 2003-01-08 | 2004-07-08 | Stavely Donald J. | Apparatus and method for reducing image blur in a digital camera |
US20050157169A1 (en) * | 2004-01-20 | 2005-07-21 | Tomas Brodsky | Object blocking zones to reduce false alarms in video surveillance systems |
Cited By (156)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7650058B1 (en) | 2001-11-08 | 2010-01-19 | Cernium Corporation | Object selective video recording |
US20040233282A1 (en) * | 2003-05-22 | 2004-11-25 | Stavely Donald J. | Systems, apparatus, and methods for surveillance of an area |
US7397504B2 (en) * | 2003-06-24 | 2008-07-08 | Microsoft Corp. | Whiteboard view camera |
US20040263646A1 (en) * | 2003-06-24 | 2004-12-30 | Microsoft Corporation | Whiteboard view camera |
US20050041139A1 (en) * | 2003-08-05 | 2005-02-24 | Lowles Robert J. | Method for automatic backlight adjustment |
US20060203098A1 (en) * | 2004-02-19 | 2006-09-14 | Henninger Paul E Iii | Method and apparatus for producing frame accurate position data in a PTZ dome camera with open loop control |
US7643066B2 (en) * | 2004-02-19 | 2010-01-05 | Robert Bosch Gmbh | Method and apparatus for producing frame accurate position data in a PTZ dome camera with open loop control |
US9432632B2 (en) | 2004-09-17 | 2016-08-30 | Proximex Corporation | Adaptive multi-modal integrated biometric identification and surveillance systems |
US8976237B2 (en) | 2004-09-17 | 2015-03-10 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US7956890B2 (en) | 2004-09-17 | 2011-06-07 | Proximex Corporation | Adaptive multi-modal integrated biometric identification detection and surveillance systems |
US20130259440A2 (en) * | 2005-06-23 | 2013-10-03 | Israel Aerospace Industries Ltd. | A system and method for tracking moving objects |
US20070035623A1 (en) * | 2005-07-22 | 2007-02-15 | Cernium Corporation | Directed attention digital video recordation |
US8026945B2 (en) | 2005-07-22 | 2011-09-27 | Cernium Corporation | Directed attention digital video recordation |
US8587655B2 (en) | 2005-07-22 | 2013-11-19 | Checkvideo Llc | Directed attention digital video recordation |
US20070154088A1 (en) * | 2005-09-16 | 2007-07-05 | King-Shy Goh | Robust Perceptual Color Identification |
US20070146484A1 (en) * | 2005-11-16 | 2007-06-28 | Joshua Horton | Automated video system for context-appropriate object tracking |
US20070286456A1 (en) * | 2006-06-12 | 2007-12-13 | Honeywell International Inc. | Static camera tracking system |
US7907750B2 (en) * | 2006-06-12 | 2011-03-15 | Honeywell International Inc. | System and method for autonomous object tracking |
US9491359B2 (en) | 2006-09-01 | 2016-11-08 | Canon Kabushiki Kaisha | Automatic-tracking camera apparatus |
US8174580B2 (en) * | 2006-09-01 | 2012-05-08 | Canon Kabushiki Kaisha | Automatic-tracking camera apparatus |
US20080055413A1 (en) * | 2006-09-01 | 2008-03-06 | Canon Kabushiki Kaisha | Automatic-tracking camera apparatus |
US7839431B2 (en) | 2006-10-19 | 2010-11-23 | Robert Bosch Gmbh | Image processing system and method for improving repeatability |
EP2511876A1 (en) | 2006-10-19 | 2012-10-17 | Robert Bosch Gmbh | Camera assembly for improving repeatability |
US20080094480A1 (en) * | 2006-10-19 | 2008-04-24 | Robert Bosch Gmbh | Image processing system and method for improving repeatability |
US8031254B2 (en) * | 2006-11-22 | 2011-10-04 | Canon Kabushiki Kaisha | Optical device, imaging device, control method for optical device, and program |
US20080117326A1 (en) * | 2006-11-22 | 2008-05-22 | Canon Kabushiki Kaisha | Optical device, imaging device, control method for optical device, and program |
US20080122958A1 (en) * | 2006-11-29 | 2008-05-29 | Honeywell International Inc. | Method and system for automatically determining the camera field of view in a camera network |
US8792005B2 (en) * | 2006-11-29 | 2014-07-29 | Honeywell International Inc. | Method and system for automatically determining the camera field of view in a camera network |
US20080225127A1 (en) * | 2007-03-12 | 2008-09-18 | Samsung Electronics Co., Ltd. | Digital image stabilization method for correcting horizontal inclination distortion and vertical scaling distortion |
US7999856B2 (en) * | 2007-03-12 | 2011-08-16 | Samsung Electronics Co., Ltd. | Digital image stabilization method for correcting horizontal inclination distortion and vertical scaling distortion |
US10484611B2 (en) * | 2007-03-23 | 2019-11-19 | Sensormatic Electronics, LLC | Multi-video navigation |
US9544563B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation system |
US9544496B1 (en) | 2007-03-23 | 2017-01-10 | Proximex Corporation | Multi-video navigation |
US20170085803A1 (en) * | 2007-03-23 | 2017-03-23 | Proximex Corporation | Multi-video navigation |
US10326940B2 (en) | 2007-03-23 | 2019-06-18 | Proximex Corporation | Multi-video navigation system |
US20100008539A1 (en) * | 2007-05-07 | 2010-01-14 | Johnson Robert A | Systems and methods for improved target tracking for tactical imaging |
US7916895B2 (en) * | 2007-05-07 | 2011-03-29 | Harris Corporation | Systems and methods for improved target tracking for tactical imaging |
US20090154565A1 (en) * | 2007-12-12 | 2009-06-18 | Samsung Electronics Co., Ltd. | Video data compression method, medium, and system |
US20100322476A1 (en) * | 2007-12-13 | 2010-12-23 | Neeraj Krantiveer Kanhere | Vision based real time traffic monitoring |
US8379926B2 (en) * | 2007-12-13 | 2013-02-19 | Clemson University | Vision based real time traffic monitoring |
US8687065B2 (en) * | 2008-02-06 | 2014-04-01 | International Business Machines Corporation | Virtual fence |
US20090195654A1 (en) * | 2008-02-06 | 2009-08-06 | Connell Ii Jonathan H | Virtual fence |
US8390685B2 (en) * | 2008-02-06 | 2013-03-05 | International Business Machines Corporation | Virtual fence |
US20090207247A1 (en) * | 2008-02-15 | 2009-08-20 | Jeffrey Zampieron | Hybrid remote digital recording and acquisition system |
US8345097B2 (en) * | 2008-02-15 | 2013-01-01 | Harris Corporation | Hybrid remote digital recording and acquisition system |
WO2009122416A2 (en) * | 2008-04-02 | 2009-10-08 | Evt Technologies Ltd. | Object content navigation |
WO2009122416A3 (en) * | 2008-04-02 | 2010-03-18 | Evt Technologies Ltd. | System for monitoring a surveillance target by navigating video stream content |
US9398266B2 (en) * | 2008-04-02 | 2016-07-19 | Hernan Carzalo | Object content navigation |
US20090251537A1 (en) * | 2008-04-02 | 2009-10-08 | David Keidar | Object content navigation |
US20090251539A1 (en) * | 2008-04-04 | 2009-10-08 | Canon Kabushiki Kaisha | Monitoring device |
US9224279B2 (en) * | 2008-04-04 | 2015-12-29 | Canon Kabushiki Kaisha | Tour monitoring device |
US8385688B2 (en) * | 2008-08-27 | 2013-02-26 | International Business Machines Corporation | System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans |
US20100054525A1 (en) * | 2008-08-27 | 2010-03-04 | Leiguang Gong | System and method for automatic recognition and labeling of anatomical structures and vessels in medical imaging scans |
US9215467B2 (en) | 2008-11-17 | 2015-12-15 | Checkvideo Llc | Analytics-modulated coding of surveillance video |
US11172209B2 (en) | 2008-11-17 | 2021-11-09 | Checkvideo Llc | Analytics-modulated coding of surveillance video |
US20100194882A1 (en) * | 2009-01-30 | 2010-08-05 | Ajit Belsarkar | Method and apparatus for monitoring using a movable video device |
US8754940B2 (en) | 2009-01-30 | 2014-06-17 | Robert Bosch Gmbh | Method and apparatus for monitoring using a movable video device |
US20140009624A1 (en) * | 2009-06-24 | 2014-01-09 | Sony Corporation | Movable-mechanical-section controlling device, method of controlling movable mechanical section, and program |
US9712735B2 (en) * | 2009-06-24 | 2017-07-18 | Sony Corporation | Movable-mechanical-section controlling device, method of controlling movable mechanical section, and program |
US8908916B2 (en) * | 2009-07-29 | 2014-12-09 | Sony Corporation | Control apparatus, control method, and program to search for a subject and automatically perform image-recording |
US20120121134A1 (en) * | 2009-07-29 | 2012-05-17 | Sony Corporation | Control apparatus, control method, and program |
CN102550016A (en) * | 2009-07-29 | 2012-07-04 | 索尼公司 | Control device, control method, and program |
US8531525B2 (en) * | 2009-12-22 | 2013-09-10 | Utc Fire & Security Americas Corporation, Inc. | Surveillance system and method for operating same |
US20110149072A1 (en) * | 2009-12-22 | 2011-06-23 | Mccormack Kenneth | Surveillance system and method for operating same |
US20110175999A1 (en) * | 2010-01-15 | 2011-07-21 | Mccormack Kenneth | Video system and method for operating same |
US20110205355A1 (en) * | 2010-02-19 | 2011-08-25 | Panasonic Corporation | Data Mining Method and System For Estimating Relative 3D Velocity and Acceleration Projection Functions Based on 2D Motions |
US11831955B2 (en) | 2010-07-12 | 2023-11-28 | Time Warner Cable Enterprises Llc | Apparatus and methods for content management and account linking across multiple content delivery networks |
EP3116214A1 (en) * | 2010-07-26 | 2017-01-11 | Canon Kabushiki Kaisha | Control apparatus for auto-tracking camera system and auto-tracking camera system equipped with same |
US20120019664A1 (en) * | 2010-07-26 | 2012-01-26 | Canon Kabushiki Kaisha | Control apparatus for auto-tracking camera system and auto-tracking camera system equipped with same |
US8600157B2 (en) | 2010-08-13 | 2013-12-03 | Institute For Information Industry | Method, system and computer program product for object color correction |
US8692884B2 (en) * | 2010-09-17 | 2014-04-08 | Gish Technology, Inc. | Apparatus and method for assessing visual acuity |
US20120069179A1 (en) * | 2010-09-17 | 2012-03-22 | Gish Kurt A | Apparatus and method for assessing visual acuity |
US20120120248A1 (en) * | 2010-11-16 | 2012-05-17 | Electronics And Telecommunications Research Institute | Image photographing device and security management device of object tracking system and object tracking method |
US9497388B2 (en) | 2010-12-17 | 2016-11-15 | Pelco, Inc. | Zooming factor computation |
US9571795B2 (en) * | 2011-03-15 | 2017-02-14 | Omron Corporation | Image processing device and image processing program |
US20140015957A1 (en) * | 2011-03-15 | 2014-01-16 | Omron Corporation | Image processing device and image processing program |
US9438783B2 (en) * | 2011-04-20 | 2016-09-06 | Canon Kabushiki Kaisha | Automatic tracking control apparatus for camera apparatus and automatic tracking camera system having same |
US20120268608A1 (en) * | 2011-04-20 | 2012-10-25 | Canon Kabushiki Kaisha | Automatic tracking control apparatus for camera apparatus and automatic tracking camera system having same |
US10791257B2 (en) | 2011-11-14 | 2020-09-29 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
US10462347B2 (en) | 2011-11-14 | 2019-10-29 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
US10021286B2 (en) | 2011-11-14 | 2018-07-10 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
US11489995B2 (en) | 2011-11-14 | 2022-11-01 | Gopro, Inc. | Positioning apparatus for photographic and video imaging and recording and system utilizing the same |
US20130155182A1 (en) * | 2011-12-20 | 2013-06-20 | Motorola Solutions, Inc. | Methods and apparatus to compensate for overshoot of a desired field of vision by a remotely-controlled image capture device |
US9413941B2 (en) * | 2011-12-20 | 2016-08-09 | Motorola Solutions, Inc. | Methods and apparatus to compensate for overshoot of a desired field of vision by a remotely-controlled image capture device |
US9955074B2 (en) * | 2012-09-18 | 2018-04-24 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target tracking method and system for intelligent tracking high speed dome camera |
US20150229841A1 (en) * | 2012-09-18 | 2015-08-13 | Hangzhou Hikvision Digital Technology Co., Ltd. | Target tracking method and system for intelligent tracking high speed dome camera |
US20140085545A1 (en) * | 2012-09-26 | 2014-03-27 | General Electric Company | System and method for detection and tracking of moving objects |
US9465997B2 (en) * | 2012-09-26 | 2016-10-11 | General Electric Company | System and method for detection and tracking of moving objects |
US8842202B2 (en) * | 2012-10-10 | 2014-09-23 | Nec Casio Mobile Communications Ltd. | Mobile terminal, method for adjusting magnification of camera and program |
US9319635B2 (en) | 2012-11-30 | 2016-04-19 | Pelco, Inc. | Window blanking for pan/tilt/zoom camera |
US20140232818A1 (en) * | 2013-02-19 | 2014-08-21 | Disney Enterprises, Inc. | Method and device for spherical resampling for video generation |
US10165157B2 (en) * | 2013-02-19 | 2018-12-25 | Disney Enterprises, Inc. | Method and device for hybrid robotic/virtual pan-tilt-zoom cameras for autonomous event recording |
US20140267643A1 (en) * | 2013-03-15 | 2014-09-18 | Orcam Technologies Ltd. | Systems and methods for automatic control of a continuous action |
US8908021B2 (en) * | 2013-03-15 | 2014-12-09 | Orcam Technologies Ltd. | Systems and methods for automatic control of a continuous action |
US20150078618A1 (en) * | 2013-09-17 | 2015-03-19 | Electronics And Telecommunications Research Institute | System for tracking dangerous situation in cooperation with mobile device and method thereof |
US10218899B2 (en) | 2013-09-19 | 2019-02-26 | Canon Kabushiki Kaisha | Control method in image capture system, control apparatus and a non-transitory computer-readable storage medium |
US20150131858A1 (en) * | 2013-11-13 | 2015-05-14 | Fujitsu Limited | Tracking device and tracking method |
US9734395B2 (en) * | 2013-11-13 | 2017-08-15 | Fujitsu Limited | Tracking device and tracking method |
US10166675B2 (en) | 2014-03-13 | 2019-01-01 | Brain Corporation | Trainable modular robotic apparatus |
US10391628B2 (en) | 2014-03-13 | 2019-08-27 | Brain Corporation | Trainable modular robotic apparatus and methods |
CN104008371A (en) * | 2014-05-22 | 2014-08-27 | 南京邮电大学 | Regional suspicious target tracking and recognizing method based on multiple cameras |
US20170251169A1 (en) * | 2014-06-03 | 2017-08-31 | Gopro, Inc. | Apparatus and methods for context based video data compression |
US20170322551A1 (en) * | 2014-07-30 | 2017-11-09 | SZ DJI Technology Co., Ltd | Systems and methods for target tracking |
US11194323B2 (en) | 2014-07-30 | 2021-12-07 | SZ DJI Technology Co., Ltd. | Systems and methods for target tracking |
US11106201B2 (en) * | 2014-07-30 | 2021-08-31 | SZ DJI Technology Co., Ltd. | Systems and methods for target tracking |
US10178283B2 (en) * | 2014-08-14 | 2019-01-08 | Hanwha Techwin Co., Ltd. | Dome camera device |
US20190116299A1 (en) * | 2014-08-14 | 2019-04-18 | Hanwha Techwin Co., Ltd. | Dome camera device |
US20170353635A1 (en) * | 2014-08-14 | 2017-12-07 | Hanwha Techwin Co., Ltd. | Dome camera device |
US10848653B2 (en) * | 2014-08-14 | 2020-11-24 | Hanwha Techwin Co., Ltd. | Dome camera device |
CN106161882A (en) * | 2014-08-14 | 2016-11-23 | 韩华泰科株式会社 | Dome-type camera device |
US10176683B2 (en) * | 2014-09-18 | 2019-01-08 | Honeywell International Inc. | Virtual panoramic thumbnail to summarize and visualize video content in video surveillance and in connected home business |
US20160086462A1 (en) * | 2014-09-18 | 2016-03-24 | Honeywell International Inc. | Virtual Panoramic Thumbnail to Summarize and Visualize Video Content in Video Surveillance and in Connected Home Business |
US10807230B2 (en) | 2015-06-24 | 2020-10-20 | Brain Corporation | Bistatic object detection apparatus and methods |
US10445885B1 (en) | 2015-10-01 | 2019-10-15 | Intellivision Technologies Corp | Methods and systems for tracking objects in videos and images using a cost matrix |
US20170155805A1 (en) * | 2015-11-26 | 2017-06-01 | Samsung Electronics Co., Ltd. | Method and apparatus for capturing an image of an object by tracking the object |
US10244150B2 (en) * | 2015-11-26 | 2019-03-26 | Samsung Electronics Co., Ltd. | Method and apparatus for capturing an image of an object by tracking the object |
US10325339B2 (en) * | 2016-04-26 | 2019-06-18 | Qualcomm Incorporated | Method and device for capturing image of traffic sign |
US10338460B2 (en) * | 2016-05-24 | 2019-07-02 | Compal Electronics, Inc. | Projection apparatus |
US11681896B2 (en) | 2017-03-17 | 2023-06-20 | The Regents Of The University Of Michigan | Method and apparatus for constructing informative outcomes to guide multi-policy decision making |
CN107147841A (en) * | 2017-04-25 | 2017-09-08 | 北京小鸟看看科技有限公司 | A kind of binocular camera method of adjustment, device and system |
CN111034171A (en) * | 2017-09-26 | 2020-04-17 | 索尼半导体解决方案公司 | Information processing system |
US11321985B2 (en) * | 2017-10-10 | 2022-05-03 | Lumileds Llc | Counterfeit detection in bright environment |
US11585920B2 (en) * | 2017-12-28 | 2023-02-21 | Intel Corporation | Vehicle sensor fusion |
US11269331B2 (en) | 2018-07-20 | 2022-03-08 | May Mobility, Inc. | Multi-perspective system and method for behavioral policy selection by an autonomous agent |
US11269332B2 (en) | 2018-07-20 | 2022-03-08 | May Mobility, Inc. | Multi-perspective system and method for behavioral policy selection by an autonomous agent |
US11847913B2 (en) | 2018-07-24 | 2023-12-19 | May Mobility, Inc. | Systems and methods for implementing multimodal safety operations with an autonomous agent |
CN110823218A (en) * | 2018-08-10 | 2020-02-21 | 极光飞行科学公司 | Object tracking system |
CN112804940A (en) * | 2018-10-04 | 2021-05-14 | 伯恩森斯韦伯斯特(以色列)有限责任公司 | ENT tool using camera |
US10979645B2 (en) * | 2018-11-01 | 2021-04-13 | Hanwha Techwin Co., Ltd. | Video capturing device including cameras and video capturing system including the same |
US20200145585A1 (en) * | 2018-11-01 | 2020-05-07 | Hanwha Techwin Co., Ltd. | Video capturing device including cameras and video capturing system including the same |
US11525887B2 (en) * | 2019-02-15 | 2022-12-13 | May Mobility, Inc. | Systems and methods for intelligently calibrating infrastructure devices using onboard sensors of an autonomous agent |
US11513189B2 (en) * | 2019-02-15 | 2022-11-29 | May Mobility, Inc. | Systems and methods for intelligently calibrating infrastructure devices using onboard sensors of an autonomous agent |
CN110413166A (en) * | 2019-07-02 | 2019-11-05 | 上海熙菱信息技术有限公司 | A kind of method of history video real time linear tracking |
US11228737B2 (en) * | 2019-07-31 | 2022-01-18 | Ricoh Company, Ltd. | Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium |
US20220124288A1 (en) * | 2019-07-31 | 2022-04-21 | Ricoh Company, Ltd. | Output control apparatus, display terminal, remote control system, control method, and non-transitory computer-readable medium |
CN113315902A (en) * | 2020-02-26 | 2021-08-27 | 深圳英飞拓科技股份有限公司 | Lens stretching control signal hedging optimization method under high-speed dome camera |
US11095825B1 (en) * | 2020-06-02 | 2021-08-17 | Vitalchat, Inc. | Camera pan, tilt, and zoom history |
US11667306B2 (en) | 2020-07-01 | 2023-06-06 | May Mobility, Inc. | Method and system for dynamically curating autonomous vehicle policies |
WO2022001407A1 (en) * | 2020-07-01 | 2022-01-06 | 海信视像科技股份有限公司 | Camera control method and display device |
US11352023B2 (en) | 2020-07-01 | 2022-06-07 | May Mobility, Inc. | Method and system for dynamically curating autonomous vehicle policies |
US11565716B2 (en) | 2020-07-01 | 2023-01-31 | May Mobility, Inc. | Method and system for dynamically curating autonomous vehicle policies |
US11679776B2 (en) | 2020-12-14 | 2023-06-20 | May Mobility, Inc. | Autonomous vehicle safety platform system and method |
US11396302B2 (en) | 2020-12-14 | 2022-07-26 | May Mobility, Inc. | Autonomous vehicle safety platform system and method |
US11673566B2 (en) | 2020-12-14 | 2023-06-13 | May Mobility, Inc. | Autonomous vehicle safety platform system and method |
US11673564B2 (en) | 2020-12-14 | 2023-06-13 | May Mobility, Inc. | Autonomous vehicle safety platform system and method |
US11472444B2 (en) | 2020-12-17 | 2022-10-18 | May Mobility, Inc. | Method and system for dynamically updating an environmental representation of an autonomous agent |
US20220321756A1 (en) * | 2021-02-26 | 2022-10-06 | Hill-Rom Services, Inc. | Patient monitoring system |
US11882366B2 (en) * | 2021-02-26 | 2024-01-23 | Hill-Rom Services, Inc. | Patient monitoring system |
US11845468B2 (en) | 2021-04-02 | 2023-12-19 | May Mobility, Inc. | Method and system for operating an autonomous agent with incomplete environmental information |
US11472436B1 (en) | 2021-04-02 | 2022-10-18 | May Mobility, Inc | Method and system for operating an autonomous agent with incomplete environmental information |
US11745764B2 (en) | 2021-04-02 | 2023-09-05 | May Mobility, Inc. | Method and system for operating an autonomous agent with incomplete environmental information |
US11565717B2 (en) | 2021-06-02 | 2023-01-31 | May Mobility, Inc. | Method and system for remote assistance of an autonomous agent |
CN113691777A (en) * | 2021-08-18 | 2021-11-23 | 浙江大华技术股份有限公司 | Zoom tracking method and device for ball machine, storage medium and electronic device |
US11956532B2 (en) * | 2021-08-31 | 2024-04-09 | Panasonic Intellectual Property Management Co., Ltd. | Displacement detection method, image-capturing instruction method, displacement detection device, and image-capturing instruction device |
US11814072B2 (en) | 2022-02-14 | 2023-11-14 | May Mobility, Inc. | Method and system for conditional operation of an autonomous agent |
US11849206B1 (en) * | 2022-02-23 | 2023-12-19 | Amazon Technologies, Inc. | Systems and methods for automated object identification and tracking |
Also Published As
Publication number | Publication date |
---|---|
US20120081552A1 (en) | 2012-04-05 |
JP4451122B2 (en) | 2010-04-14 |
US9876993B2 (en) | 2018-01-23 |
EP1427212B9 (en) | 2014-11-19 |
JP5242938B2 (en) | 2013-07-24 |
EP1427212A1 (en) | 2004-06-09 |
JP2007274703A (en) | 2007-10-18 |
JP2004180321A (en) | 2004-06-24 |
EP1427212B1 (en) | 2014-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9876993B2 (en) | Video tracking system and method | |
US7382400B2 (en) | Image stabilization system and method for a video camera | |
EP1914682B1 (en) | Image processing system and method for improving repeatability | |
US7742077B2 (en) | Image stabilization system and method for a video camera | |
KR100606485B1 (en) | Object tracking method and object tracking apparatus | |
US7385626B2 (en) | Method and system for performing surveillance | |
US7457433B2 (en) | System and method for analyzing video from non-static camera | |
US7643066B2 (en) | Method and apparatus for producing frame accurate position data in a PTZ dome camera with open loop control | |
US9363487B2 (en) | Scanning camera-based video surveillance system | |
US8170277B2 (en) | Automatic tracking apparatus and automatic tracking method | |
US20100013917A1 (en) | Method and system for performing surveillance | |
JP2002522980A (en) | Image tracking in multiple camera systems | |
JP2004519956A (en) | Target selection method for automatic video tracking system | |
WO2002093916A2 (en) | Attentive panoramic visual sensor | |
KR101204870B1 (en) | Surveillance camera system and method for controlling thereof | |
CN103167234A (en) | Method for setting up a monitoring camera | |
JP3828096B2 (en) | Object tracking device | |
KR100382792B1 (en) | Intelligent robotic camera and distributed control apparatus thereof | |
Nelson et al. | Dual camera zoom control: A study of zoom tracking stability | |
KR20230033033A (en) | System and method for determining the position of the camera image center point by the vanishing point position |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: BOSCH SECURITY SYSTEMS, INC., NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SABLAK, SEZAI;KATZ, DAVID N.;TRAJKOVIC, MIROSLAV;AND OTHERS;REEL/FRAME:013903/0636;SIGNING DATES FROM 20030106 TO 20030129 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |