
Point Feature Types

Image feature detection is a building block of many computer vision tasks, such as image registration, tracking, and object detection. The Computer Vision System Toolbox™ includes a variety of functions for image feature detection. These functions return points objects that store information specific to particular types of features, including (x,y) coordinates (in the Location property). You can pass a points object from a detection function to a variety of other functions that require feature points as inputs. The algorithm that a detection function uses determines the type of points object it returns.
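
For example, here is a minimal sketch of this detect-then-use workflow. It assumes an example image such as cameraman.tif (shipped with the Image Processing Toolbox) is available:

    % Detect corners; the algorithm determines the returned object type,
    % here a cornerPoints object from the Harris-Stephens detector.
    I = imread('cameraman.tif');
    points = detectHarrisFeatures(I);

    % The (x,y) coordinates of the detected features are stored in the
    % Location property of the points object.
    xy = points.Location;

    % The points object can be passed directly to functions that accept
    % feature points, for example to mark the strongest detections.
    J = insertMarker(I, points.selectStrongest(50));
    imshow(J)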

Functions That Return Points Objects

Points Object: cornerPoints
Returned By:
    detectFASTFeatures: Features from accelerated segment test (FAST) algorithm. Uses an approximate metric to determine corners. [1]
    detectMinEigenFeatures: Minimum eigenvalue algorithm. Uses the minimum eigenvalue metric to determine corner locations. [4]
    detectHarrisFeatures: Harris-Stephens algorithm. More efficient than the minimum eigenvalue algorithm. [3]
Type of Feature: Corners. Single-scale detection. Point tracking, image registration with little or no scale change, and corner detection in scenes of human origin, such as streets and indoor scenes.

Points Object: BRISKPoints
Returned By:
    detectBRISKFeatures: Binary Robust Invariant Scalable Keypoints (BRISK) algorithm. [6]
Type of Feature: Corners. Multiscale detection. Point tracking, image registration, and corner detection in scenes of human origin, such as streets and indoor scenes. Handles changes in scale and rotation.

Points Object: SURFPoints
Returned By:
    detectSURFFeatures: Speeded-up robust features (SURF) algorithm. [11]
Type of Feature: Blobs. Multiscale detection. Object detection and image registration with scale and rotation changes.

Points Object: KAZEPoints
Returned By:
    detectKAZEFeatures: KAZE is not an acronym, but a name derived from the Japanese word kaze, which means wind. The reference is to the flow of air ruled by nonlinear processes on a large scale. [12]
Type of Feature: Multiscale blob features. Reduced blurring of object boundaries.

Points Object: MSERRegions
Returned By:
    detectMSERFeatures: Maximally stable extremal regions (MSER) algorithm. [7][8][9][10]
Type of Feature: Regions of uniform intensity. Multiscale detection. Registration, wide-baseline stereo calibration, text detection, and object detection. Handles changes in scale and rotation. More robust to affine transforms than other detectors.
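
As a rough illustration of the single-scale versus multiscale distinction in this table, the following sketch (cameraman.tif is an assumed example image) detects Harris corners and SURF blobs in the same image and plots both. Only the SURFPoints object carries per-feature scale information:

    I = imread('cameraman.tif');

    corners = detectHarrisFeatures(I);   % cornerPoints object (single scale)
    blobs   = detectSURFFeatures(I);     % SURFPoints object (multiscale)

    % SURFPoints stores a scale for each feature, which cornerPoints does not,
    % so SURF features can be matched across scale changes.
    imshow(I)
    hold on
    plot(corners.selectStrongest(30));
    plot(blobs.selectStrongest(30), 'ShowScale', true);
    hold off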

Functions That Accept Points Objects

relativeCameraPose: Compute relative rotation and translation between camera poses.

estimateFundamentalMatrix: Estimate the fundamental matrix from corresponding points in stereo images.

estimateGeometricTransform: Estimate a geometric transform from matching point pairs.

estimateUncalibratedRectification: Uncalibrated stereo rectification.

extractFeatures: Extract interest point descriptors. The feature vector depends on the extraction method (see the sketch after this table):

    BRISK: The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians.

    FREAK: The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians.

    SURF: The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians. When you use an MSERRegions object with the SURF method, the Centroid property of the object extracts SURF descriptors. The Axes property of the object selects the scale of the SURF descriptors such that the circle representing the feature has an area proportional to the MSER ellipse area. The scale is calculated as 1/4*sqrt((majorAxes/2).*(minorAxes/2)) and saturated to 1.6, as required by the SURFPoints object.

    KAZE: Nonlinear pyramid-based features. The function sets the Orientation property of the validPoints output object to the orientation of the extracted features, in radians. When you use an MSERRegions object with the KAZE method, the Location property of the object is used to extract KAZE descriptors. The Axes property of the object selects the scale of the KAZE descriptors such that the circle representing the feature has an area proportional to the MSER ellipse area.

    Block: Simple square neighborhood. The Block method extracts only the neighborhoods fully contained within the image boundary. Therefore, the output, validPoints, can contain fewer points than the input POINTS.

    Auto: The function selects the Method based on the class of the input points: the FREAK method for a cornerPoints input object, the SURF method for a SURFPoints or MSERRegions input object, and the FREAK method for a BRISKPoints input object. For an M-by-2 input matrix of [x y] coordinates, the function implements the Block method.

extractHOGFeatures: Extract histogram of oriented gradients (HOG) features.

insertMarker: Insert markers in an image or video.

showMatchedFeatures: Display corresponding feature points.

triangulate: 3-D locations of undistorted matching points in stereo images.

undistortPoints: Correct point coordinates for lens distortion.
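
The following sketch shows how several of these functions fit together in a registration workflow. The image and the synthetic distortion below are assumptions chosen for illustration, and matchFeatures, although not listed in the table, is the usual companion to extractFeatures:

    original  = imread('cameraman.tif');
    distorted = imresize(imrotate(original, 20), 1.2);  % synthetic rotation and scale change

    % Detect and describe SURF features in both images.
    pts1 = detectSURFFeatures(original);
    pts2 = detectSURFFeatures(distorted);
    [f1, vpts1] = extractFeatures(original,  pts1);
    [f2, vpts2] = extractFeatures(distorted, pts2);

    % Match descriptors and keep the corresponding valid points.
    idx = matchFeatures(f1, f2);
    matched1 = vpts1(idx(:, 1));
    matched2 = vpts2(idx(:, 2));

    % Estimate the geometric transform from the matched pairs (outliers are
    % rejected internally), then display the corresponding inlier points.
    [tform, inlierDistorted, inlierOriginal] = ...
        estimateGeometricTransform(matched2, matched1, 'similarity');
    showMatchedFeatures(original, distorted, inlierOriginal, inlierDistorted)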

References

[1] Rosten, E., and T. Drummond, “Machine Learning for High-Speed Corner Detection.” 9th European Conference on Computer Vision. Vol. 1, 2006, pp. 430–443.

[2] Mikolajczyk, K., and C. Schmid. “A performance evaluation of local descriptors.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 27, Issue 10, 2005, pp. 1615–1630.

[3] Harris, C., and M. J. Stephens. “A Combined Corner and Edge Detector.” Proceedings of the 4th Alvey Vision Conference. August 1988, pp. 147–152.

[4] Shi, J., and C. Tomasi. “Good Features to Track.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. June 1994, pp. 593–600.

[5] Tuytelaars, T., and K. Mikolajczyk. “Local Invariant Feature Detectors: A Survey.” Foundations and Trends in Computer Graphics and Vision. Vol. 3, Issue 3, 2007, pp. 177–280.

[6] Leutenegger, S., M. Chli, and R. Siegwart. “BRISK: Binary Robust Invariant Scalable Keypoints.” Proceedings of the IEEE International Conference on Computer Vision (ICCV). 2011.

[7] Nister, D., and H. Stewenius. "Linear Time Maximally Stable Extremal Regions." Lecture Notes in Computer Science. 10th European Conference on Computer Vision. Marseille, France: 2008, no. 5303, pp. 183–196.

[8] Matas, J., O. Chum, M. Urban, and T. Pajdla. "Robust wide-baseline stereo from maximally stable extremal regions." Proceedings of the British Machine Vision Conference. 2002, pp. 384–396.

[9] Obdrzalek, D., S. Basovnik, L. Mach, and A. Mikulik. "Detecting Scene Elements Using Maximally Stable Colour Regions." Communications in Computer and Information Science. La Ferte-Bernard, France: 2009, Vol. 82 CCIS, pp. 107–115.

[10] Mikolajczyk, K., T. Tuytelaars, C. Schmid, A. Zisserman, T. Kadir, and L. Van Gool. "A Comparison of Affine Region Detectors." International Journal of Computer Vision. Vol. 65, No. 1–2, November 2005, pp. 43–72.

[11] Bay, H., A. Ess, T. Tuytelaars, and L. Van Gool. “SURF: Speeded Up Robust Features.” Computer Vision and Image Understanding (CVIU). Vol. 110, No. 3, 2008, pp. 346–359.

[12] Alcantarilla, P. F., A. Bartoli, and A. J. Davison. "KAZE Features." ECCV 2012, Part VI, LNCS 7577, 2012, pp. 214–227.
