Papers by Zulfiqar Hasan Khan

We propose a novel visual tracking scheme that exploits both the geometrical structure of the Grassmann manifold and piecewise geodesics under a Bayesian framework. Two particle filters are employed alternately on the manifold: one updates the appearance subspace online using sliding-window observations, and the other tracks moving objects on the manifold based on dynamic shape and appearance models. The main contributions of the paper are: (a) an online manifold learning strategy driven by a particle filter, in which a mixture of dynamic models captures both the changes of manifold bases in the tangent plane and the piecewise geodesics on the manifold; (b) a manifold object tracker that jointly incorporates object shape in the tangent plane and the manifold prediction error of object appearance in a particle filter framework. Experiments on videos containing significant object pose changes show very robust tracking results. The proposed scheme also performs better than three existing trackers in terms of tracking drift and the tightness and accuracy of the tracked boxes.
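The geodesic steps described above rely on mappings between the Grassmann manifold and its tangent plane. As a rough illustration of that geometry (not the paper's actual update equations), the following Python sketch implements the standard exponential and logarithm maps for orthonormal subspace bases; the dimensions and the half-step example are hypothetical.

    import numpy as np

    def grassmann_log(Y, X):
        # Tangent vector at the subspace spanned by Y (n x p, orthonormal
        # columns) pointing toward the subspace spanned by X.
        YtX = Y.T @ X
        T = (X - Y @ YtX) @ np.linalg.inv(YtX)
        U, s, Vt = np.linalg.svd(T, full_matrices=False)
        return U @ np.diag(np.arctan(s)) @ Vt

    def grassmann_exp(Y, H):
        # Move from the subspace spanned by Y along tangent direction H
        # (a piecewise-geodesic step on the manifold).
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Y_new = Y @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
        Q, _ = np.linalg.qr(Y_new)   # re-orthonormalize for numerical safety
        return Q

    # Example: step halfway along the geodesic between two random
    # 3-dimensional subspaces of a 10-dimensional feature space.
    rng = np.random.default_rng(0)
    Y, _ = np.linalg.qr(rng.standard_normal((10, 3)))
    X, _ = np.linalg.qr(rng.standard_normal((10, 3)))
    H = grassmann_log(Y, X)
    Y_half = grassmann_exp(Y, 0.5 * H)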

This paper addresses online learning of the reference object distribution in the context of two hybrid tracking schemes: one combines the mean shift with local point feature correspondences, and the other combines the mean shift with a particle filter under the Bayesian framework. The reference object distribution is built as a kernel-weighted color histogram. The main contributions of the proposed schemes include: (a) an adaptive learning strategy that updates the reference object distribution when changes are caused by intrinsic object dynamics rather than partial occlusion or intersection; (b) novel dynamic maintenance of object feature points by exploring both foreground and background sets; (c) integration of the adaptive appearance and local point features into the joint object appearance similarity and local point feature correspondence-based tracker to improve [7]; (d) integration of the adaptive appearance into the joint appearance similarity and particle filter tracker under the Bayesian framework to improve . Experimental results on a range of videos captured by dynamic and stationary cameras demonstrate the effectiveness of the proposed schemes in terms of robustness to partial occlusions, tracking drift, and the tightness and accuracy of the tracked bounding box. Comparisons are also made with the two hybrid trackers as well as three existing trackers.
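To make the adaptive-learning idea concrete, the Python sketch below builds a kernel-weighted color histogram and applies a gated update that blends in the current observation only when it is similar enough to the reference, so that occlusion-induced changes are rejected. The Epanechnikov kernel, the Bhattacharyya gate, and the threshold and learning-rate values are common choices assumed for illustration, not the exact rule used in the paper.

    import numpy as np

    def kernel_weighted_histogram(patch, n_bins=8):
        # Kernel-weighted RGB histogram of an object patch (H x W x 3, uint8);
        # pixels near the patch centre get higher weight (Epanechnikov kernel).
        h, w, _ = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        dy = (ys - (h - 1) / 2) / (h / 2)
        dx = (xs - (w - 1) / 2) / (w / 2)
        k = np.clip(1.0 - (dx**2 + dy**2), 0.0, None)   # Epanechnikov profile
        bins = (patch.astype(int) * n_bins) // 256       # per-channel bin index
        idx = bins[..., 0] * n_bins**2 + bins[..., 1] * n_bins + bins[..., 2]
        hist = np.bincount(idx.ravel(), weights=k.ravel(), minlength=n_bins**3)
        return hist / (hist.sum() + 1e-12)

    def bhattacharyya(p, q):
        return float(np.sum(np.sqrt(p * q)))

    def update_reference(q_ref, p_cur, rho_occ=0.7, alpha=0.1):
        # Gated update: blend in the current histogram only when similarity is
        # high enough to suggest intrinsic object change rather than occlusion.
        if bhattacharyya(q_ref, p_cur) > rho_occ:
            q_new = (1 - alpha) * q_ref + alpha * p_cur
            return q_new / q_new.sum()
        return q_ref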

In this paper, we introduce a novel algorithm that builds upon the combined anisotropic mean-shift and particle filter framework. The anisotropic mean shift [4], with five degrees of freedom, is extended to operate on a partition of the object into concentric rings. This adds spatial information to the object description, which makes the algorithm more resilient to occlusion and less likely to confuse the object with other objects of similar color densities. Experiments conducted on videos containing deformable objects with long-term partial occlusion (or short-term full occlusion) and intersection have shown robust tracking performance, especially for objects with long-term partial occlusion, short-term full occlusion, close-color background clutter, severe object deformation, and fast-changing motion. Comparisons with two existing methods have shown marked improvement in terms of robustness to occlusions, tightness and accuracy of the tracked box, and tracking drift.
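The concentric-ring partition is what adds coarse spatial layout to an otherwise global color description. A minimal Python sketch of such a ring-wise descriptor is given below; the number of rings and histogram bins are illustrative, and the anisotropic mean-shift iterations of [4] themselves are not reproduced.

    import numpy as np

    def ring_masks(h, w, n_rings=3):
        # Partition an (h x w) patch into concentric elliptical rings around
        # its centre; corner pixels outside the inscribed ellipse are ignored.
        ys, xs = np.mgrid[0:h, 0:w]
        dy = (ys - (h - 1) / 2) / (h / 2)
        dx = (xs - (w - 1) / 2) / (w / 2)
        r = np.sqrt(dx**2 + dy**2)
        edges = np.linspace(0.0, 1.0, n_rings + 1)
        return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]

    def ring_histograms(patch, n_rings=3, n_bins=8):
        # One colour histogram per ring, concatenated into a single
        # spatially-aware appearance descriptor.
        h, w, _ = patch.shape
        bins = (patch.astype(int) * n_bins) // 256
        idx = bins[..., 0] * n_bins**2 + bins[..., 1] * n_bins + bins[..., 2]
        descr = []
        for mask in ring_masks(h, w, n_rings):
            hist = np.bincount(idx[mask], minlength=n_bins**3)
            descr.append(hist / (hist.sum() + 1e-12))
        return np.concatenate(descr)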

Journal of Signal Processing Systems
This paper addresses the issue of tracking a single visual object through crowded scenarios, where a target object may be intersected or partially occluded by other objects for a long duration, experience severe deformation and pose changes, and move at varying speeds against a cluttered background. A robust visual object tracking scheme is proposed that exploits the dynamics of object shape and appearance similarity. The method uses a particle filter in which a multi-mode anisotropic mean shift is embedded to improve the initial particles. Compared with the conventional particle filter and mean shift-based tracking (Shan et al. 2004), our method offers the following novelties. We employ a fully tunable rectangular bounding box described by five parameters (2D central location, width, height, and orientation) with full functionality in the joint tracking scheme. We derive the equations for the multi-mode version of the anisotropic mean shift, where the rectangular bounding box is partitioned into concentric areas, allowing better tracking of objects with multiple modes. The bounding box parameters are then computed using eigen-decomposition of the mean shift estimates and weighted averaging. This enables a more efficient redistribution of initial particles towards locations associated with large weights, and hence efficient particle filter tracking using a very small number of particles (N = 15 is used). Experiments have been conducted on videos containing a range of complex scenarios, where tracking results are further evaluated using two objective criteria and compared with two existing tracking methods. Our results show that the proposed method is robust in terms of tracking drift and the tightness and accuracy of tracked bounding boxes, especially in scenarios where the target object undergoes long-term partial occlusions, intersections, severe deformation, or pose changes, or appears against cluttered background with similar color distributions.
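To illustrate why embedding a mean-shift refinement lets the filter work with as few as N = 15 particles, the skeleton below refines every propagated particle before weighting and resampling. It is a generic sketch under stated assumptions, not the paper's derivation: refine() and likelihood() are stand-ins for the multi-mode anisotropic mean shift and the appearance-similarity likelihood, and the motion-noise values are arbitrary.

    import numpy as np

    # State: the 5-parameter box (cx, cy, width, height, angle).
    def refine(state, frame):
        return state    # stand-in for the multi-mode anisotropic mean-shift step

    def likelihood(state, frame):
        return 1.0      # stand-in for the appearance-similarity weight

    def track_step(particles, weights, frame, motion_std, rng):
        # 1) propagate particles with a random-walk motion model
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # 2) refine each particle toward a nearby high-likelihood location,
        #    which is why a very small particle set (e.g. N = 15) suffices
        particles = np.array([refine(p, frame) for p in particles])
        # 3) re-weight and normalise
        weights = np.array([likelihood(p, frame) for p in particles], dtype=float)
        weights /= weights.sum()
        # 4) systematic resampling
        positions = (rng.random() + np.arange(len(weights))) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        idx = np.minimum(idx, len(weights) - 1)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))

    # Usage: initialise N = 15 particles around the first-frame box and call
    # track_step() once per frame (frame is unused by the placeholders above).
    rng = np.random.default_rng(0)
    box0 = np.array([160.0, 120.0, 40.0, 80.0, 0.0])
    particles = np.tile(box0, (15, 1))
    weights = np.full(15, 1.0 / 15)
    motion_std = np.array([5.0, 5.0, 2.0, 2.0, 0.05])
    particles, weights = track_step(particles, weights, None, motion_std, rng)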

This paper proposes a new Bayesian online learning method on a Riemannian manifold for video objects. The basic idea is to consider the dynamic appearance of an object as a point moving on a manifold, where a dual model is applied to estimate the posterior trajectory of this moving point at each time instant under the Bayesian framework. The dual model uses two state variables for modeling the online learning process on Riemannian manifolds: one for object appearances on the Riemannian manifold, the other for velocity vectors in the tangent planes of the manifold. The key difference of our method from most existing Riemannian manifold tracking methods is that the Riemannian mean is computed from a set of particle manifold points at each time instant rather than from a sliding window of manifold points at different times. In addition, we propose to use Gabor filter outputs on partitioned sub-areas of the object bounding box as features, from which the covariance matrix of object appearance is formed. As an application example, the proposed online learning is employed in a Riemannian manifold object tracking scheme where tracking and online learning are performed alternately. Experiments are performed on both visual-band and infrared videos, and compared with the two most relevant existing manifold trackers. Results show significant improvement in terms of tracking drift and the tightness and accuracy of tracked boxes, especially for objects with large pose changes.
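The appearance points in this scheme are covariance matrices built from Gabor responses, i.e. points on the manifold of symmetric positive-definite (SPD) matrices. The Python sketch below shows two generic ingredients, a region covariance descriptor and a weighted mean of SPD matrices; the log-Euclidean mean is used as a simple stand-in for the intrinsic Riemannian (Karcher) mean of the particle points, and the random feature stacks merely play the role of precomputed Gabor responses.

    import numpy as np

    def covariance_descriptor(feature_maps):
        # feature_maps: (d, H, W) array of per-pixel features, e.g. Gabor
        # filter responses over one sub-area of the object bounding box.
        d = feature_maps.shape[0]
        F = feature_maps.reshape(d, -1)
        return np.cov(F) + 1e-6 * np.eye(d)   # small jitter keeps the matrix SPD

    def _sym_log(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(np.log(vals)) @ vecs.T

    def _sym_exp(S):
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(np.exp(vals)) @ vecs.T

    def log_euclidean_mean(covs, weights=None):
        # Weighted mean of SPD matrices under the log-Euclidean metric,
        # a cheap surrogate for the intrinsic Riemannian mean.
        if weights is None:
            w = np.full(len(covs), 1.0 / len(covs))
        else:
            w = np.asarray(weights, dtype=float) / np.sum(weights)
        acc = sum(wi * _sym_log(C) for wi, C in zip(w, covs))
        return _sym_exp(acc)

    # Example: average the covariance descriptors of three random 7-channel
    # feature stacks (stand-ins for particle appearance points).
    rng = np.random.default_rng(0)
    covs = [covariance_descriptor(rng.standard_normal((7, 24, 24))) for _ in range(3)]
    mean_cov = log_euclidean_mean(covs, weights=[0.5, 0.3, 0.2])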
