[visionlist] CfP: IEEE T-SMC:B special issue on Computer Vision for RGB-D Sensors: Kinect and Its Applications
jungong han
jungonghan77 at gmail.com
Mon Apr 23 08:37:19 GMT 2012
*Computer Vision for RGB-D Sensors: Kinect and Its Applications*
Special issue of IEEE Transactions on Systems, Man, and Cybernetics - Part
B: Cybernetics
*Call for Papers:*
Depth cameras have been exploited in computer vision for several years, but
the high price and the poor quality of such devices have limited their
applicability. With the invention of the low-cost Microsoft Kinect sensor,
high-resolution depth and visual (RGB) sensing has become available for
widespread use as an off-the-shelf technology. The complementary nature of
the depth and visual (RGB) information in the Kinect sensor opens up new
opportunities to solve fundamental problems in computer vision, including
object and activity recognition, people tracking, 3D mapping and
localization, etc. For a long time, researchers have been challenged by
many problems such as detecting and identifying objects/humans in
real-world situations. Traditional object segmentation and tracking
algorithms based on RGB images are not always reliable when the environment
is cluttered or the illumination conditions suddenly change, both of which
occur frequently in a real-world setting. However, effectively combining
depth and RGB data may provide new solutions to these problems, where
object segmentation based on depth information is robust against
environmental changes, and the accuracy of object tracking/identification
can be improved by considering the depth, motion and appearance information
of an object.
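As a brief illustration of this point (not part of the call itself), the following Python sketch shows how a depth map that has been registered to the RGB image can yield a foreground mask from a static background depth model; the function and parameter names are hypothetical, and the millimetre units follow common Kinect conventions.

    import numpy as np

    def depth_foreground_mask(depth_mm, background_mm, min_diff_mm=50, max_range_mm=4000):
        """Mark pixels that are significantly closer than a static background model.

        depth_mm, background_mm: HxW depth images in millimetres, with 0
        denoting missing Kinect measurements. Because the test relies on
        geometry rather than colour, sudden illumination changes do not
        affect it.
        """
        valid = (depth_mm > 0) & (depth_mm < max_range_mm) & (background_mm > 0)
        return valid & ((background_mm - depth_mm) > min_diff_mm)

Such a mask could then seed an RGB-based tracker, combining the depth, motion, and appearance cues mentioned above.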
Freely available SDKs and posture trackers for the Kinect further encourage
new solutions to classic problems in computer vision. Compared to
conventional computer vision systems (based on
RGB images), systems using the Kinect sensor face a number of specific
challenges, including characterization of objects based on the RGB-Depth
images; correlation between per-pixel depth and RGB information when one of
them is missing or corrupted; and, semantic linkage and decision making
based on the fused information. Compared to stereo vision or time-of-flight
(ToF) techniques that exploit other depth sensors (e.g., the Bumblebee or
PMD cameras), algorithms designed for the Kinect sensor need to solve
additional problems, even though the overall depth sensing quality of the
Kinect is much better than that of the other two. These problems include
intelligent computation of per-pixel depth from a noisy and sparse depth
point cloud; spatial calibration and correlation of the depth image with
the RGB images; data mining from the inhomogeneous depth map; and the
design of illumination patterns that handle light interference effects.
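To make the calibration and correlation problem above concrete, here is a minimal, illustrative sketch (not from the call) of reprojecting Kinect depth pixels into the RGB camera under a standard pinhole model; it assumes the intrinsics K_depth and K_rgb and the depth-to-RGB extrinsics (R, t) have already been estimated, and all names are hypothetical.

    import numpy as np

    def register_depth_to_rgb(depth_mm, K_depth, K_rgb, R, t):
        """Reproject each valid depth pixel into the RGB camera.

        depth_mm : HxW depth image in millimetres (0 = no measurement)
        K_depth, K_rgb : 3x3 intrinsic matrices of the two cameras
        R, t : rotation (3x3) and translation (3,) from depth to RGB frame
        Returns an array of (u_rgb, v_rgb, z) rows, one per valid pixel.
        """
        v, u = np.nonzero(depth_mm > 0)
        z = depth_mm[v, u].astype(np.float64)

        # Back-project depth pixels to 3D points in the depth camera frame.
        fx, fy = K_depth[0, 0], K_depth[1, 1]
        cx, cy = K_depth[0, 2], K_depth[1, 2]
        X = (u - cx) * z / fx
        Y = (v - cy) * z / fy
        pts = np.stack([X, Y, z], axis=1)

        # Transform into the RGB camera frame and project with its intrinsics.
        pts_rgb = pts @ R.T + t
        uv = pts_rgb @ K_rgb.T
        return np.stack([uv[:, 0] / uv[:, 2], uv[:, 1] / uv[:, 2], pts_rgb[:, 2]], axis=1)

In practice the reprojected coordinates would be rounded or interpolated, and occlusions and the missing measurements noted above still have to be handled, which is exactly the kind of problem this special issue targets.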
This special issue is specifically dedicated to new algorithms and/or new
applications based on the Kinect (or similar RGB-D) sensors. The key
outcomes of the special issue will be a better understanding of: (1) the
contributions of this new sensor within the computer vision community, (2)
the possible applications of the Kinect sensor, and (3) the key challenges
and solutions for research in this domain. Topics of interest include, but
are not limited to:
· Object detection and recognition
· Segmentation and clustering
· Human pose estimation
· Human activity recognition and gesture recognition
· 3D scene reconstruction
· Human-computer interaction exploiting depth information
· Robotic vision based on Kinect
· Data mining based on RGB-D information
· Intelligent computing for generating dense depth maps
· Decision making for fusing sensors
· Adaptive and learning techniques for a Kinect network (multi-Kinect)
· Transmission and visualization of 3D scenes
· 3D integration and understanding in multimedia applications
· Practical issues of deploying Kinect
· Social and ethical issues of Kinect sensing in public and private spaces
· Use of Kinect to acquire ground truth data in context-aware computing
· Industrial applications
Prospective authors should visit
http://www.ieeesmc.org/publications/index.html for information on paper
submission. Manuscripts should be submitted using the Manuscript Central
system at
http://mc.manuscriptcentral.com/smcb-ieee.
Please choose “SI: Vision for Kinect” as the manuscript type. Manuscripts
will be peer reviewed according to the standard IEEE process.
*Important Dates:*
Submission of full papers 30 September 2012
Notification to authors 30 January 2013
Submission of revised papers 30 March 2013
Final decision on revised papers 30 May 2013
Tentative publication date Fourth quarter 2013
*Guest Editors:*
Ling Shao, The University of Sheffield, UK
Jungong Han, CWI, The Netherlands
Dong Xu, Nanyang Technological University, Singapore
Jamie Shotton, Microsoft Research Cambridge, UK