Microsoft researchers have published a paper detailing potential improvements to the accuracy and speed of Kinect's sensor routines, although it is unclear whether or when the findings will be implemented.
A summary of the paper, with a link to the full text, appears on the website I Programmer. The work primarily involves new machine learning techniques that improve the device's ability to discern and track body parts.
Written by Microsoft Research in Cambridge, the paper describes experiments involving millions of 3D depth maps already labelled with body parts. By aggregating the results, a 1,000-core server was able to build a series of decision trees, which could then be used to assign a predicted body part to each depth pixel.
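The per-pixel classification described above can be illustrated with a toy sketch. The research compares depth differences at offsets around each pixel against thresholds learned at the tree nodes; everything concrete below (the depth map, the offsets, the thresholds, the labels, the single hard-coded "tree") is invented for illustration, not taken from the paper.

```python
# Toy sketch of per-pixel body-part classification with a single
# hand-written decision "tree". The real system uses forests of deep
# trees trained on millions of labelled depth images; all values here
# are made up for illustration.

# A tiny synthetic depth map (metres); smaller values are closer.
DEPTH = [
    [4.0, 4.0, 4.0, 4.0],
    [4.0, 2.0, 2.0, 4.0],
    [4.0, 2.5, 2.5, 4.0],
    [4.0, 2.5, 2.5, 4.0],
]
BACKGROUND = 4.0

def feature(depth, x, y, offset):
    """Depth difference between a pixel and an offset neighbour.

    The offset is scaled by the pixel's own depth, so the feature is
    roughly invariant to how far the player stands from the sensor.
    """
    d = depth[y][x]
    dx, dy = offset
    ox = x + int(round(dx / d))
    oy = y + int(round(dy / d))
    if 0 <= oy < len(depth) and 0 <= ox < len(depth[0]):
        return depth[oy][ox] - d
    return BACKGROUND - d  # off-image probes read as background

def classify_pixel(depth, x, y):
    """Walk a hard-coded two-node 'tree' (thresholds are invented)."""
    if depth[y][x] >= BACKGROUND:
        return "background"
    # Node: is there background just above this pixel? Call it 'head'.
    if feature(depth, x, y, (0, -2)) > 1.0:
        return "head"
    return "torso"

labels = [[classify_pixel(DEPTH, x, y) for x in range(4)]
          for y in range(4)]
for row in labels:
    print(row)
```

A trained forest would average the label distributions from many such trees per pixel, rather than trusting a single threshold as this sketch does.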
Using the aggregated data allows Kinect to estimate joint positions in a player's 3D model far more quickly than is currently possible. As well as removing the need for a formal calibration step, the new technique could reduce lag from its current, relatively high levels.
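Once each pixel carries a body-part label, a joint position can be proposed from the pixels sharing that label. The research uses a mean-shift procedure over a weighted density estimate; the sketch below substitutes a plain centroid, which conveys the idea but is not the actual algorithm, and the labelled pixels are invented.

```python
# Toy sketch: propose a joint position from pixels that share a
# body-part label. The real system uses mean shift over a weighted
# density estimate; a plain centroid (used here) only conveys the idea.
# The labelled pixels below are invented for illustration.

def joint_estimate(labelled_pixels, part):
    """Centroid of the (x, y) pixels labelled with `part`."""
    pts = [(x, y) for (x, y, label) in labelled_pixels if label == part]
    if not pts:
        return None  # no evidence for this body part in the frame
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

pixels = [
    (1, 1, "head"), (2, 1, "head"),
    (1, 2, "torso"), (2, 2, "torso"),
    (1, 3, "torso"), (2, 3, "torso"),
]
print(joint_estimate(pixels, "head"))   # → (1.5, 1.0)
print(joint_estimate(pixels, "torso"))  # → (1.5, 2.5)
```

Because every frame is classified independently, a joint proposal like this needs no per-player calibration pose, which is the property the article highlights.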
The paper suggests that the new system can process a new user in under five milliseconds, an order of magnitude faster than the current technology. Because of the large number of example depth maps, the system also copes with a wider range of body types.
The paper does not make clear whether or when the research will be used in the retail Kinect sensor, but as a software solution it could potentially be delivered as a simple firmware update.