STEFANO CARRINO
http://home.hefr.ch/carrinos/
PhD Student
2008-2011

Technologies Evaluation & State of the Art

This document surveys technologies for gesture interpretation and analysis and proposes a set of parameters for their classification.

Table of Contents
Introduction
Our vision, in brief
Technologies Study
State of the Art: papers
Gesture recognition by computer vision
Gesture Recognition by Accelerometers
Technology
Technology Evaluation
Evaluation Criteria
Technology Comparison
Parameters' weight
Comparison
Conclusions and Remarks
Accelerometers, gloves and cameras…
Proposition
Divers
Observation
Some common features for gesture recognition by image analysis
Gesture recognition or classification methods
"Gorilla arm"
References
Attached
Introduction

In the following sections we illustrate the state of the art in technologies for the acquisition of gesture-recognition data. We then introduce some parameters for the evaluation of these approaches, motivating the weight of each parameter according to our vision. In the last section we present the conclusions of this state-of-the-art research.

Our vision, in brief

The AVATAR system will be composed of two elements:
The Smart Portable Device (SPD).
The Smart Environmental Device (SED).

The SPD has to provide gesture interpretation for all the applications that are environment-independent as far as data acquisition is concerned (i.e. the cause-and-effect actions, inputs, computation and output all take place within the SPD itself).
The SED offers gesture recognition where the SPD does not perform well. In addition, it could offer a layer for connecting multiple SPDs and the possibility of faster processing by contributing its computing power.
In this first step of our work we focus on the SPD, while keeping future developments in mind.

Technologies Study

The choice of the input technologies employed for gesture interpretation is very important in order to achieve good gesture-recognition results. In recent years the evolution of technology and materials has pushed forward the feasibility and robustness of this kind of system; more complex algorithms are also now ready for this kind of application (increased computing speed, on mobile devices too, makes the "real-time approach" a reality).

State of the Art: papers

Below is a short list of the articles we have read; a brief description follows each title.

Gesture recognition by computer vision

Arm-pointing Gesture Interface Using Surrounded Stereo Cameras System [1]
- 2004
- Surrounding stereo cameras (four stereo cameras in the four corners of the ceiling)
- Arm pointing
- Setting: 12 frames/s
- Recognition rate: 97.4% standing
- Recognition rate: 94% sitting posture
- The lighting environment had a slight influence

Improving Continuous Gesture Recognition with Spoken Prosody [2]
- 2003
- Cameras and microphone
- HMM - Bayesian network
- Gesture and speech synchronization
- 72.4% of 1876 gestures were classified correctly

Pointing Gesture Recognition based on 3D Tracking of Face, Hands and Head Orientation [3]
- 2003
- Stereo camera (1)
- HMM
- 65% / 83% (without / with head orientation)
- 90% after user-specific training

Real-time Gesture Recognition with Minimal Training Requirements and On-Line Learning [4]
- 2007
- (SNM) HMMs modified for reduced training requirements
- Viterbi inference
- Optical, pressure, mouse/pen
- Result: ???

Recognition of Arm Gestures Using Multiple Orientation Sensors: gesture classification [5]
- 2004
- IS-300 Pro Precision Motion Tracker by InterSense
- Results

Vision-Based Interfaces for Mobility [6]
- 2004
- Head-worn camera
- AdaBoost
- (Larger than 30x20 pixels) runs at 10 frames per second on a 640x480 video stream on a 3 GHz desktop computer
- Interesting references
- 93.76% of postures were classified correctly

GestureVR: Vision-Based 3D Hand Interface for Spatial Interaction [7]
- 1998
- 2 cameras, 60 Hz, 3D space
- 3 gestures
- Finite-state classification
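Papers such as [1] and [3] reduce pointing recognition, once head and hand are tracked in 3D, to a geometric question: which target does the head-to-hand ray hit? The sketch below shows only that final geometric step; it is not the authors' code, and the function name, coordinates and screen plane are invented example values.

# Illustrative geometry behind pointing interfaces such as [1] and [3]:
# model the pointing direction as the head->hand ray and intersect it
# with a target plane. All numeric values are made-up examples.
import numpy as np

def pointing_target(head, hand, plane_point, plane_normal):
    """Intersect the head->hand ray with a plane; None if parallel or behind."""
    d = hand - head                      # pointing direction
    denom = plane_normal @ d
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = plane_normal @ (plane_point - head) / denom
    return head + t * d if t > 0 else None

head = np.array([0.0, 1.7, 2.0])         # eye height, 2 m from the screen
hand = np.array([0.3, 1.4, 1.5])         # outstretched hand
target = pointing_target(head, hand, np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]))  # screen plane z = 0

The recognition part of those papers (e.g. the HMM over motion features in [3]) then decides whether a pointing gesture occurred at all; the geometry above only resolves where it points.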
Gesture Recognition by Accelerometers

Accelerometer Based Gesture Recognition for Real Time Applications
- Input: Bluetooth accelerometer
- HMM
- Gestures recognized correctly: 96%
- Reaction time: 300 ms

Accelerometer Based Real-Time Gesture Recognition [8]
- Input: Sony-Ericsson W910i (3-axis accelerometer)
- 97.4% and 96% accuracy on a personalized gesture set
- HMM & SVM (Support Vector Machine)
- HMM ("My algorithm was based on a recent Nokia Research Center paper [11] with some modifications. I have used the freely available JAHMM library for implementation.")
- Runtime was tested on a new-generation MacBook with a dual-core 2 GHz processor and 1 GB of memory.
- Recognition time was independent of the number of training examples and averaged 3.7 ms for HMM and 0.4 ms for SVM.

Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer [11]
- 2008
- Input: three-dimensional MEMS accelerometer and a single-chip microcontroller
- 94% Arabic numeral recognition

Gesture-recognition with Non-referenced Tracking [12]
- 2005-2006 (?)
- Bluetooth accelerometer (MEMS) + gyroscopes
- 3motion™
- Particular algorithm for gesture recognition
- No numerical results

Real time gesture recognition using Continuous Time Recurrent Neural Networks [13]
- 2007
- Accelerometers
- Continuous Time Recurrent Neural Networks (CTRNN)
- Neuro-fuzzy system (in a previous project)
- Isolated gestures: 98% on the training set and 94% on the testing set
- Realistic environment: 80.5% and 63.6%
- The neuro-fuzzy system can't work in dynamic (realistic) situations
- G. Bailador, G. Trivino, and S. Guadarrama. Gesture recognition using a neuro-fuzzy predictor. In International Conference of Artificial Intelligence and Soft Computing. Acta Press, 2006.

ADL Classification Using Triaxial Accelerometers and RFID [14]
- >2004
- ADL = Activities of Daily Living
- 2 wireless (homemade ZigBee) accelerometers for 5 body states
- Glove-type RFID reader
- 90% over 12 ADLs
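Several of the accelerometer papers above train one HMM per gesture and classify a new trace by maximum likelihood. The following is a minimal sketch of that general scheme, not a reproduction of any cited implementation: it uses the open-source Python package hmmlearn (the work in [8] used the Java JAHMM library), and the state count and data layout are illustrative assumptions.

# Per-gesture HMM classifier for 3-axis accelerometer sequences (sketch).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(train_data, n_states=5):
    """train_data: dict gesture_name -> list of (T_i, 3) accelerometer arrays."""
    models = {}
    for name, seqs in train_data.items():
        X = np.vstack(seqs)                  # stack all training sequences
        lengths = [len(s) for s in seqs]     # sequence boundaries for fit()
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, seq):
    """Pick the gesture whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))

The SVM variant in [8] replaces the per-model likelihood comparison with a single discriminative classifier over fixed-length feature vectors, which explains its much lower recognition time.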
Technology

The input devices used in recent years are:

Accelerometers:
Wireless.
Non-wireless.

Camera [17]:
Depth-aware cameras. Using specialized cameras one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of the scene. These can be effective for the detection of hand gestures thanks to their short-range capabilities.
Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. This method uses more traditional cameras and thus does not suffer the same distance issues as current depth-aware cameras. To obtain the cameras' relations, one can use a positioning reference such as a lexian-stripe (?) or infrared emitters. (The geometry shared by stereo and depth-aware cameras is sketched after this list.)
Single camera. A normal camera can be used for gesture recognition where the resources/environment would not be convenient for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera allows accessibility to a wider audience.

Angle Shape Sensor [18]:
Exploiting the reflection of light inside an optical fibre, a 3D model of the hand(s) can be reconstructed. Also available in wireless form (Bluetooth); otherwise the present solutions (gloves) have to be connected with …

Infrared technology.
Ultrasound / UWB (Ultra-WideBand).
RFID.
Gyroscopes (two angular-velocity sensors).
Controller-based gestures. These controllers act as an extension of the body, so that when gestures are performed some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand; another is the Wii Remote, which can study changes in acceleration over time to represent gestures.
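The stereo and depth-aware options above rest on the same pinhole-camera geometry: a rectified stereo pair recovers depth from disparity as Z = f·B/d, and a depth map becomes a 3D point cloud by back-projecting pixels through the camera intrinsics. A minimal sketch, with placeholder values for the focal length, baseline and principal point:

# Depth from stereo disparity, and back-projection of a depth pixel to a
# 3D point. Illustrative only; f, B, cx, cy are placeholder parameters.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Z = f * B / d for a rectified stereo pair."""
    return f_px * baseline_m / disparity_px

def backproject(u, v, z, f_px, cx, cy):
    """Pinhole model: pixel (u, v) at depth z -> camera-frame (x, y, z)."""
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return (x, y, z)

z = depth_from_disparity(f_px=700.0, baseline_m=0.12, disparity_px=35.0)  # 2.4 m
point = backproject(u=400, v=260, z=z, f_px=700.0, cx=320.0, cy=240.0)

A depth-aware camera delivers z directly per pixel, so only the back-projection step remains; this is why its useful range is bounded by the sensor while stereo range is bounded by baseline and resolution.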
Technology Evaluation

Evaluation Criteria

The following list defines the evaluation parameters for the technologies presented in the previous section.

Resolution: in relative terms, resolution describes the smallest change that can be detected. It is expressed as a fraction of an amount to which one can easily relate; for example, printer manufacturers often describe resolution in dots per inch, which is easier to relate to than dots per page.
Accuracy: accuracy describes the amount of uncertainty that exists in a measurement with respect to the relevant absolute standard. It can be defined in several different ways and depends on the specification philosophy of the supplier as well as on product design. Most accuracy specifications include a gain and an offset parameter.
Latency: waiting time until the system first responds.
Range of motion.
User comfort.
Cost, in economic terms.

Technology Comparison

Parameters' weight

In this section we show how the weights in the table below were chosen, reflecting "my personal choice".

First) Cost: we are in a research context, so it is not so important to evaluate the cost of our system from a marketing standpoint. But I agree with the idea put forward by H. Ford: "True progress is made only when the advantages of a new technology are within reach of everyone". For this reason cost, too, appears as a parameter in the table: a concept with no possible future practical application is useless (gloves for hand modelling costing 5000 $ or more are quite unlikely to become cheap in the future).

Second) User comfort: a technology completely invisible to the user would be ideal. From this perspective it isn't easy to deal with the challenge of how to interface the user with the system. For example, implementing gesture recognition without any burden on the final user (gloves, camera, sensors…) is not a dream; on the other hand, the output and the feedback still have to be presented to the user. From this viewpoint a head-mounted display (we are thinking about applications in the context of augmented reality) looks like the first natural solution. At that point, adding a camera to this device does not make the situation worse, and brings a huge advantage (and future possibilities):
Possible uncoupling from the environment (if enough computational power is provided to the user): all the technology is on the user.
In any case, if we need it, we can establish a network with other systems to gain more information and enrich our system.
We are able to enter the domain of wearable/mobile systems. It is a challenge, but it makes our system more valuable and richer.

Third) Range of motion: this is a direct consequence of the previous point. With a wearable technology we can get rid of this problem; the range of motion is strictly related to the context and does not depend on our system. With other choices (e.g. cameras and sensors in the environment) the system will work only in a specific environment and can lose generality.

Fourth) Latency: dealing with this problem at this level is rather premature. Latency depends on the technology used and on the algorithms applied for gesture recognition and tracking, but potentially also on other parameters such as the distance between the input system, the processing system and the output/feedback system. (For example, if the carrier of information is sound, the time of flight may not be negligible in a real-time system.)

Fifth) Accuracy & Resolution: first of all the system has to be reliable, so these parameters are really meaningful for our application. As far as we are concerned, we would like a tracking system able to correctly discern a small vocabulary of gestures and to make realistic interactions possible with three-dimensional virtual objects in a three-dimensional mixed world.
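To make the resolution and accuracy definitions concrete: a raw sensor count is usually mapped to physical units through a linear gain/offset calibration, the accuracy budget bounds the residual error of that mapping, and resolution is set by the quantization step. A toy example with invented numbers:

# Linear sensor calibration: measured = gain * true + offset, so the
# corrected value is (raw - offset) / gain. All numbers are invented.
def calibrate(raw, gain=1.02, offset=0.05):
    return (raw - offset) / gain

# A hypothetical 12-bit accelerometer spanning +/-2 g resolves
# 4 g / 4096 counts ~= 0.98 mg per count: that is its resolution.
# Accuracy additionally covers the residual gain/offset error.
resolution_g = 4.0 / 2**12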
Comparison

Analyzing the input approaches we have noticed two things:
Some of the devices presented here are the direct evolution of previous ones;
Nowadays some technologies are (in this domain, of course) evidently inferior to others.

Following the first remark, we discard wired accelerometers from further analysis; they have no advantages compared to the equivalent wireless solution.
Following the second, we can exclude RFID in favour of UWB.
In the previous section we listed "gyroscopes" as a possible technology; this is not completely correct: in reality this kind of technology has real applicability only if integrated with accelerometers or other sensors.

Technology \ Parameter        Res./Acc.  Latency  Range of motion  User comfort  Cost      RESULT
Accelerometers - wireless         3         4            5              2           5        55
Camera - single camera            2         4            5              4           4        53
Camera - stereo cameras           3         2            ?            3 (?)         3      26+3*?
Camera - depth-aware cameras      4       4 (?)          5              3           3        60
Angle shape sensor (gloves)       4         4            5              2        1 (-100)    54
Infrared technology               4         4            5              4           4        63
Ultrasound                        2         ?            ?              ?           ?       10+X
Weight                            5         4            3              2           1
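The RESULT column is the weighted sum of each row's scores with the weights in the last row (5 for resolution/accuracy down to 1 for cost). A minimal reproduction of that computation for the fully scored rows (rows containing '?' or special annotations are omitted):

# Reproduce the RESULT column as a weighted sum of parameter scores.
WEIGHTS = [5, 4, 3, 2, 1]   # res./acc., latency, range, comfort, cost

SCORES = {
    "Accelerometers - wireless":    [3, 4, 5, 2, 5],   # -> 55
    "Camera - single camera":       [2, 4, 5, 4, 4],   # -> 53
    "Camera - depth-aware cameras": [4, 4, 5, 3, 3],   # -> 60
    "Infrared technology":          [4, 4, 5, 4, 4],   # -> 63
}

for tech, scores in SCORES.items():
    print(tech, sum(w * s for w, s in zip(WEIGHTS, scores)))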
From this table we identified the two most interesting approaches:
The infrared technology;
The depth-aware camera.
In reality these two technologies are not uncorrelated. Indeed, depth-aware cameras are often equipped with infrared emitters and receivers to compute the position in space of the objects in the camera's field of view [19].

Conclusions and Remarks

Choosing a technology for our future work was not easy at all! Above all: the validity of a technology is strictly linked to its use. For example, the results obtained using a camera for gesture interpretation are strictly connected to the algorithms used to recognize the gestures. So it is impracticable to say THIS IS THE technology to use. Moreover, there are other factors (such as technical evolution) that we have to take into account.

Computer vision offers the user a less cumbersome interface, requiring only that they remain within the field of view of the camera or cameras. By deducing features and movement in real time from the captured images, gesture and posture recognition can be performed. However, computer vision typically requires good lighting conditions, and the occlusion issue makes this solution application-dependent.

Generally, there are two principal ways to tackle the issues tied to gesture recognition:
- Computer vision;
- Accelerometers (often coupled with gyroscopes or other sensors).
Each approach has advantages and disadvantages. In general, published results show gesture recognition rates above 80% (often above 90%) within a restricted vocabulary, and the evolution of new technology keeps pushing these results toward higher levels.

Accelerometers, gloves and cameras…

The scenarios we have thought about are in the context of augmented reality; for this reason it is natural to think of a head-mounted display, and adding a lightweight camera to it will not drastically change user comfort.
Wireless technology provides fairly unobtrusive sensors, but their integration on a human body is somewhat intrusive.
Gloves are another simple device, not too intrusive (in my opinion), but a reliable mapping in 3D space nowadays comes at a non-negligible cost [18].
However, considering generalized scenarios and the most varied types of gesture (body, arms, hands…), we do not discard the idea of bringing together several kinds of sensors.

Proposition

What we propose for the next step is to think about scientific problems such as user identification and multi-user management, context dependence (tracking), definition of a model/language of gesture, and gesture recognition (acquisition and analysis).
All this while fixing two goals for the future applications:
Usability.
That is:
Robustness;
Reliability.
That is not (at this moment):
Easy to wear (weight).
Augmented/virtual reality applicability:
Mobility;
3D gesture recognition capability;
Dynamic (and static?) gesture recognition.

As next steps I will define the following:
Work environment;
Definition of a framework for gesture modelling (???);
Acquisition technology selection;
Delving into the state of the art for what concerns:
Gesture vocabulary definition;
Action theory;
Framework for gesture modelling.

The choice of the kind of gesture model will be made in anticipation of the following step: extending gesture interpretation to the environment. In this perspective we will also need a strategy to add a tracking system determining the user's position, coupled with head position and orientation. This will be necessary if we want to be independent of visual markers or similar solutions.
Divers

Observation [13]:
Hidden Markov models, dynamic programming and neural networks have been investigated for gesture recognition, with hidden Markov models nowadays being one of the predominant approaches to classifying sporadic gestures (e.g. classification of intentional gestures). Expert fuzzy systems have also been investigated for gesture recognition, based on analyzing complex features of the signal such as the Doppler spectrum. The disadvantage of these methods is that the classification is based on the separability of the features; therefore two different gestures with similar values for these features may be difficult to classify.

Some common features for gesture recognition by image analysis [6]:
Image moments.
Skin-tone blobs.
Coloured markers.
Geometric features.
Multiscale shape characterization.
Motion History Images and Motion Energy Images.
Shape signatures.
Polygonal approximation-based shape descriptors.
Shape descriptors based upon regions and graphs.

Gesture recognition or classification methods [16]
The following classification methods have been proposed in the literature so far (a small sketch of one of them, Dynamic Time Warping, follows the list):
Hidden Markov Model (HMM).
Time Delay Neural Network (TDNN).
Elman Network.
Dynamic Time Warping (DTW).
Dynamic Programming.
Bayesian Classifier.
Multi-layer Perceptrons.
Genetic Algorithm.
Fuzzy Inference Engine.
Template Matching.
Condensation Algorithm.
Radial Basis Functions.
Self-Organizing Map.
Binary Associative Machines.
Syntactic Pattern Recognition.
Decision Tree.
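Among the methods listed, Dynamic Time Warping is compact enough to sketch in full: it aligns two variable-length gesture traces, and the resulting cost can drive template matching (classify a gesture by its nearest training template). A straightforward O(n·m) implementation, written here as an illustration rather than taken from any of the cited papers:

# Dynamic Time Warping between two gesture traces (scalars or feature
# vectors per time step). Smaller cost = better alignment.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.atleast_1d(a[i - 1]) - np.atleast_1d(b[j - 1]))
            # extend the cheapest of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(templates, seq):
    """templates: dict gesture_name -> reference trace."""
    return min(templates, key=lambda name: dtw(templates[name], seq))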
"Gorilla arm"

"Gorilla arm" [21] was a side-effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s.
Designers of touch-menu systems failed to notice that humans aren't designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped and oversized; the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale for human-factors designers; "Remember the gorilla arm!" is shorthand for "How is this going to fly in real use?"
Gorilla arm is not a problem for specialist short-term uses, since they only involve brief interactions which do not last long enough to cause it.
References

Yamamoto, Y.; Yoda, I.; Sakaue, K.: Arm-pointing gesture interface using surrounded stereo cameras system. Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Vol. 4, 23-26 Aug. 2004, pp. 965-970.
Kettebekov, S.; Yeasin, M.; Sharma, R.: Improving continuous gesture recognition with spoken prosody. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Vol. 1, 18-20 June 2003, pp. I-565 - I-570.
Nickel, K.; Stiefelhagen, R.: Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. Proceedings of the 5th International Conference on Multimodal Interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada.
Rajko, S.; Qian, G.; Ingalls, T.; James, J.: Real-time gesture recognition with minimal training requirements and on-line learning. IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), 17-22 June 2007, pp. 1-8.
Lementec, J.-C.; Bajcsy, P.: Recognition of arm gestures using multiple orientation sensors: gesture classification. Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, 3-6 Oct. 2004, pp. 965-970.
Kolsch, M.; Turk, M.; Hollerer, T.: Vision-based interfaces for mobility. First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MOBIQUITOUS 2004), 22-26 Aug. 2004, pp. 86-94.
Segen, J.; Kumar, S.: GestureVR: vision-based 3D hand interface for spatial interaction. Proceedings of the Sixth ACM International Conference on Multimedia, September 13-16, 1998, Bristol, United Kingdom, pp. 455-464.
Beedkar, K.; Shah, D.: Accelerometer based gesture recognition for real time applications. Real Time Systems project description, MS CS, Georgia Institute of Technology.
Prekopcsák, Z.; Halácsy, P.; Gáspár-Papanek, C.: Design and development of an everyday hand gesture interface. MobileHCI '08: Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, the Netherlands, September 2008.
Prekopcsák, Z.: Accelerometer based real-time gesture recognition. POSTER 2008: Proceedings of the 12th International Student Conference on Electrical Engineering, Prague, Czech Republic, May 2008.
Zhang, S.; Yuan, C.; Zhang, Y.: Self-defined gesture recognition on keyless handheld devices using MEMS 3D accelerometer. Fourth International Conference on Natural Computation (ICNC '08), Vol. 4, 18-20 Oct. 2008, pp. 237-241.
Keir, P.; Payne, J.; Elgoyhen, J.; Horner, M.; Naef, M.; Anderson, P.: Gesture-recognition with non-referenced tracking. IEEE Symposium on 3D User Interfaces (3DUI 2006), 25-29 March 2006, pp. 151-158.
Bailador, G.; Roggen, D.; Tröster, G.; Triviño, G.: Real time gesture recognition using Continuous Time Recurrent Neural Networks. 2nd International Conference on Body Area Networks (BodyNets), 2007.
Im, S.; Kim, I.-J.; Ahn, S.C.; Kim, H.-G.: Automatic ADL classification using 3-axial accelerometers and RFID sensor. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), 20-22 Aug. 2008, pp. 697-702.
Mitra, S.; Acharya, T.: Gesture recognition: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007.
Habib, H.A.: Gesture recognition based intelligent algorithms for virtual keyboard development. PhD thesis.
http://en.wikipedia.org/wiki/Gesture_recognition
http://www.5dt.com/ (see the attached documentation).
http://www.3dvsystems.com/ (see the attached documentation).
http://en.wikipedia.org/wiki/Touchscreen
Attached

5DT Data Glove 5 Ultra

Product Description
The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern motion capture and animation professionals. It offers comfort, ease of use, a small form factor and multiple application drivers. The high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation.
The 5DT Data Glove 5 Ultra measures finger flexure (1 sensor per finger) of the user's hand. The system interfaces with the computer via a USB cable. A serial port option (RS 232, platform independent) is available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth technology (up to 20 m distance) for high-speed connectivity for up to 8 hours on a single battery. Right- and left-handed models are available. One size fits many (stretch lycra).

Features
Advanced sensor technology
Wide application support
Affordable quality
Extreme comfort
One size fits many
Automatic calibration - minimum 8-bit flexure resolution
Platform independent - USB or serial interface (RS 232)
Cross-platform SDK
Bundled software
High update rate
On-board processor
Low crosstalk between fingers
Wireless version available (5DT Ultra Wireless Kit)
Quick "hot release" connection

Related Products
5DT Data Glove 14 Ultra
5DT Data Glove 5 MRI (for Magnetic Resonance Imaging applications)
5DT Data Glove 16 MRI (for Magnetic Resonance Imaging applications)
5DT Wireless Kit Ultra
5DT Serial Interface Kit

Data Sheets and Manuals
Data sheets and manuals must be viewed with a PDF viewer; Adobe Acrobat Reader can be downloaded from http://www.adobe.com/products/acrobat/readstep.html.
5DT Data Glove Series data sheet: 5DTDataGloveUltraDatasheet.pdf (124 KB)
5DT Data Glove 5 manual: 5DT Data Glove Ultra - Manual.pdf (2,168 KB)

Glove SDK
Windows and Linux SDK (free): the current version of the Windows SDK is 2.0, and of the Linux SDK 1.04a. The driver works for all versions of the 5DT Data Glove Series; please refer to the driver manual for instructions on how to install and use it. Windows users will need a program that can open ZIP files, such as WinZip (www.winzip.com). For Linux, use the "unzip" command.
Windows 95/98/NT/2000 SDK: GloveSDK_2.0.zip (212 KB)
Linux SDK: 5DTDataGloveDriver1_04a.zip (89.0 KB)
The following files contain all the SDKs, manuals, glove software and data sheets for the 5DT Data Glove Series:
Windows 95/98/NT/2000: GloveSetup_Win2.2.exe (13.4 MB)
Linux: 5DTDataGloveSeriesLinux1_02.zip (1.21 MB)
Unix driver: the 5DT Data Glove Ultra Driver for Unix provides access to the 5DT range of data gloves at an intermediate level. The driver functionality includes multiple instances, easy initialization and shutdown, basic (raw) sensor values, scaled (auto-calibrated) sensor values, calibration functions, basic gesture recognition and a cross-platform Application Programming Interface (API). The driver utilizes POSIX threads. Pricing for this driver is shown below.

Pricing
PRODUCT NAME                      PRODUCT DESCRIPTION                              PRICE
5DT Glove 5 Ultra Right-handed    5 Sensor Data Glove: Right-handed                US$995
5DT Glove 5 Ultra Left-handed     5 Sensor Data Glove: Left-handed                 US$995
Accessories
5DT Ultra Wireless Kit            Kit allows for 2 gloves in one compact package   US$1,495
5DT Data Glove Serial Kit         Serial Interface Kit                             US$195
Drivers & Software
Alias | Kaydara MOCAP Driver                                                       US$495
3D Studio Max 6.0 Driver                                                           US$295
Maya Driver                                                                        US$295
SoftImage XSI Driver                                                               US$295
UNIX SDK                          * Serial only (no USB drivers)                   US$495

ZCam™ 3D video cameras by 3DV

Since it was established, 3DV Systems has developed four generations of depth cameras. Its primary focus in developing new products has been to reduce their cost and size, so that the unique state-of-the-art technology will be affordable and meet the needs of consumers as well as of multiple industries.
In recent years 3DV has been developing DeepC™, a chipset that embodies the company's core depth-sensing technology. This chipset can be fitted to work in any camera for any application, so that partners (e.g. OEMs) can use their own know-how, market reach and supply chain in the design and manufacturing of the overall camera. The chipset will be available for sale soon.
The new ZCam™ (previously Z-Sense), 3DV's most recently completed prototype camera, is based on DeepC™ and is the company's smallest and most cost-effective 3D camera. At the size of a standard webcam and at an affordable cost, it provides very accurate depth information at high speed (60 frames per second) and high depth resolution (1-2 cm). At the same time, it provides synchronized and synthesized quality colour (RGB) video (at 1.3 M-pixel). With these specifications, the new ZCam™ is ideal for PC-based gaming and for background replacement in web conferencing. Game developers, web-conferencing service providers and gaming enthusiasts interested in the new ZCam™ are invited to contact us.
The new ZCam™ and DeepC™ are the latest achievements backed by a tradition of high-quality depth-sensing products. Z-Cam™, the first depth video camera, was released in 2000 and was targeted primarily at broadcasting organizations. Z-Mini™ and DMC-100™ followed, each representing another leap forward in reducing cost and size.
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO
STEFANO CARRINO

Mais conteúdo relacionado

Mais procurados

IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...
IRJET-  	  Convenience Improvement for Graphical Interface using Gesture Dete...IRJET-  	  Convenience Improvement for Graphical Interface using Gesture Dete...
IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...IRJET Journal
 
Surveillance using Video Analytics
Surveillance using Video AnalyticsSurveillance using Video Analytics
Surveillance using Video Analyticsidescitation
 
ASIS Poster - Final
ASIS Poster - FinalASIS Poster - Final
ASIS Poster - FinalJacob Jose
 
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET Journal
 
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062Wael Alawsey
 
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...Innerspec Technologies
 
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...ijcseit
 
Unit 3 machine vision
Unit 3 machine vision Unit 3 machine vision
Unit 3 machine vision rknatarajan
 
Reading System for the Blind PPT
Reading System for the Blind PPTReading System for the Blind PPT
Reading System for the Blind PPTBinayak Ghosh
 
IRJET - Gesture Controlled Home Automation using CNN
IRJET -  	  Gesture Controlled Home Automation using CNNIRJET -  	  Gesture Controlled Home Automation using CNN
IRJET - Gesture Controlled Home Automation using CNNIRJET Journal
 
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer
Mems Sensor Based Approach for Gesture Recognition to Control Media in ComputerMems Sensor Based Approach for Gesture Recognition to Control Media in Computer
Mems Sensor Based Approach for Gesture Recognition to Control Media in ComputerIJARIIT
 
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...IRJET Journal
 
Design of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmsDesign of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmseSAT Publishing House
 
Design of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmsDesign of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmseSAT Journals
 
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...IJERA Editor
 
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...IRJET Journal
 
final report-4
final report-4final report-4
final report-4Zhuo Li
 
final presentation from William, Amy and Alex
final presentation from William, Amy and Alexfinal presentation from William, Amy and Alex
final presentation from William, Amy and AlexZiwei Zhu
 

Mais procurados (19)

IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...
IRJET-  	  Convenience Improvement for Graphical Interface using Gesture Dete...IRJET-  	  Convenience Improvement for Graphical Interface using Gesture Dete...
IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...
 
Surveillance using Video Analytics
Surveillance using Video AnalyticsSurveillance using Video Analytics
Surveillance using Video Analytics
 
ASIS Poster - Final
ASIS Poster - FinalASIS Poster - Final
ASIS Poster - Final
 
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
IRJET- Moving Object Detection with Shadow Compression using Foreground Segme...
 
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062
Video / Image Processing ( ITS / Task 5 ) done by Wael Saad Hameedi / P71062
 
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...
Camera Encoded Phased Array for Semi-Automated Inspection of Complex Composit...
 
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...
OBSERVATIONAL DISCRETE LINES FOR THE DETECTION OF MOVING VEHICLES IN ROAD TRA...
 
Unit 3 machine vision
Unit 3 machine vision Unit 3 machine vision
Unit 3 machine vision
 
Reading System for the Blind PPT
Reading System for the Blind PPTReading System for the Blind PPT
Reading System for the Blind PPT
 
IRJET - Gesture Controlled Home Automation using CNN
IRJET -  	  Gesture Controlled Home Automation using CNNIRJET -  	  Gesture Controlled Home Automation using CNN
IRJET - Gesture Controlled Home Automation using CNN
 
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer
Mems Sensor Based Approach for Gesture Recognition to Control Media in ComputerMems Sensor Based Approach for Gesture Recognition to Control Media in Computer
Mems Sensor Based Approach for Gesture Recognition to Control Media in Computer
 
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
IRJET= Air Writing: Gesture Recognition using Ultrasound Sensors and Grid-Eye...
 
Design of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmsDesign of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithms
 
Design of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithmsDesign of wheelchair using finger operation with image processing algorithms
Design of wheelchair using finger operation with image processing algorithms
 
Project_Report_Masters
Project_Report_MastersProject_Report_Masters
Project_Report_Masters
 
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...
Smart Bank Locker Access System Using Iris ,Fingerprints,Face Recognization A...
 
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...
IRJET- A Review Paper on Object Detection using Zynq-7000 FPGA for an Embedde...
 
final report-4
final report-4final report-4
final report-4
 
final presentation from William, Amy and Alex
final presentation from William, Amy and Alexfinal presentation from William, Amy and Alex
final presentation from William, Amy and Alex
 

Destaque

Applying Support Vector Learning to Stem Cells Classification
Applying Support Vector Learning to Stem Cells ClassificationApplying Support Vector Learning to Stem Cells Classification
Applying Support Vector Learning to Stem Cells Classificationbutest
 
2007bai7604.doc.doc
2007bai7604.doc.doc2007bai7604.doc.doc
2007bai7604.doc.docbutest
 
Resume(short)
Resume(short)Resume(short)
Resume(short)butest
 
Motivated Machine Learning for Water Resource Management
Motivated Machine Learning for Water Resource ManagementMotivated Machine Learning for Water Resource Management
Motivated Machine Learning for Water Resource Managementbutest
 
LECTURE8.PPT
LECTURE8.PPTLECTURE8.PPT
LECTURE8.PPTbutest
 
SHFpublicReportfinal_WP2.doc
SHFpublicReportfinal_WP2.docSHFpublicReportfinal_WP2.doc
SHFpublicReportfinal_WP2.docbutest
 
KM.doc
KM.docKM.doc
KM.docbutest
 
Word accessible - .:: NIB | National Industries for the Blind ::.
Word accessible - .:: NIB | National Industries for the Blind ::.Word accessible - .:: NIB | National Industries for the Blind ::.
Word accessible - .:: NIB | National Industries for the Blind ::.butest
 
Introduction
IntroductionIntroduction
Introductionbutest
 

Destaque (9)

Applying Support Vector Learning to Stem Cells Classification
Applying Support Vector Learning to Stem Cells ClassificationApplying Support Vector Learning to Stem Cells Classification
Applying Support Vector Learning to Stem Cells Classification
 
2007bai7604.doc.doc
2007bai7604.doc.doc2007bai7604.doc.doc
2007bai7604.doc.doc
 
Resume(short)
Resume(short)Resume(short)
Resume(short)
 
Motivated Machine Learning for Water Resource Management
Motivated Machine Learning for Water Resource ManagementMotivated Machine Learning for Water Resource Management
Motivated Machine Learning for Water Resource Management
 
LECTURE8.PPT
LECTURE8.PPTLECTURE8.PPT
LECTURE8.PPT
 
SHFpublicReportfinal_WP2.doc
SHFpublicReportfinal_WP2.docSHFpublicReportfinal_WP2.doc
SHFpublicReportfinal_WP2.doc
 
KM.doc
KM.docKM.doc
KM.doc
 
Word accessible - .:: NIB | National Industries for the Blind ::.
Word accessible - .:: NIB | National Industries for the Blind ::.Word accessible - .:: NIB | National Industries for the Blind ::.
Word accessible - .:: NIB | National Industries for the Blind ::.
 
Introduction
IntroductionIntroduction
Introduction
 

Semelhante a STEFANO CARRINO

IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
IRJET - A Smart Assistant for Aiding Dumb People
IRJET - A Smart Assistant for Aiding Dumb PeopleIRJET - A Smart Assistant for Aiding Dumb People
IRJET - A Smart Assistant for Aiding Dumb PeopleIRJET Journal
 
Human Activity Recognition Using Smartphone
Human Activity Recognition Using SmartphoneHuman Activity Recognition Using Smartphone
Human Activity Recognition Using SmartphoneIRJET Journal
 
A Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmA Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmIOSR Journals
 
A Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmA Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmIOSR Journals
 
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...IJERA Editor
 
Traffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsTraffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsIRJET Journal
 
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNINGSLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNINGIRJET Journal
 
IRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET Journal
 
The International Journal of Engineering and Science
The International Journal of Engineering and ScienceThe International Journal of Engineering and Science
The International Journal of Engineering and Sciencetheijes
 
The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)theijes
 
Controlling Computer using Hand Gestures
Controlling Computer using Hand GesturesControlling Computer using Hand Gestures
Controlling Computer using Hand GesturesIRJET Journal
 
IRJET - Efficient Approach for Number Plaque Accreditation System using W...
IRJET -  	  Efficient Approach for Number Plaque Accreditation System using W...IRJET -  	  Efficient Approach for Number Plaque Accreditation System using W...
IRJET - Efficient Approach for Number Plaque Accreditation System using W...IRJET Journal
 
HAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptxHAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptxDeepakkumaragrahari1
 
Arduino Based Hand Gesture Controlled Robot
Arduino Based Hand Gesture Controlled RobotArduino Based Hand Gesture Controlled Robot
Arduino Based Hand Gesture Controlled RobotIRJET Journal
 
Gesture Recognition System
Gesture Recognition SystemGesture Recognition System
Gesture Recognition SystemIRJET Journal
 
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...IRJET Journal
 

Semelhante a STEFANO CARRINO (20)

IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
IRJET - A Smart Assistant for Aiding Dumb People
IRJET - A Smart Assistant for Aiding Dumb PeopleIRJET - A Smart Assistant for Aiding Dumb People
IRJET - A Smart Assistant for Aiding Dumb People
 
Human Activity Recognition Using Smartphone
Human Activity Recognition Using SmartphoneHuman Activity Recognition Using Smartphone
Human Activity Recognition Using Smartphone
 
A Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmA Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition Algorithm
 
A Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition AlgorithmA Digital Pen with a Trajectory Recognition Algorithm
A Digital Pen with a Trajectory Recognition Algorithm
 
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
Hand Motion Gestures For Mobile Communication Based On Inertial Sensors For O...
 
E010122431
E010122431E010122431
E010122431
 
Traffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNsTraffic Sign Recognition using CNNs
Traffic Sign Recognition using CNNs
 
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNINGSLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING
SLIDE PRESENTATION BY HAND GESTURE RECOGNITION USING MACHINE LEARNING
 
IRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture RecognitionIRJET- Survey Paper on Vision based Hand Gesture Recognition
IRJET- Survey Paper on Vision based Hand Gesture Recognition
 
1886 1892
1886 18921886 1892
1886 1892
 
1886 1892
1886 18921886 1892
1886 1892
 
The International Journal of Engineering and Science
The International Journal of Engineering and ScienceThe International Journal of Engineering and Science
The International Journal of Engineering and Science
 
The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)
 
Controlling Computer using Hand Gestures
Controlling Computer using Hand GesturesControlling Computer using Hand Gestures
Controlling Computer using Hand Gestures
 
IRJET - Efficient Approach for Number Plaque Accreditation System using W...
IRJET -  	  Efficient Approach for Number Plaque Accreditation System using W...IRJET -  	  Efficient Approach for Number Plaque Accreditation System using W...
IRJET - Efficient Approach for Number Plaque Accreditation System using W...
 
HAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptxHAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptx
 
Arduino Based Hand Gesture Controlled Robot
Arduino Based Hand Gesture Controlled RobotArduino Based Hand Gesture Controlled Robot
Arduino Based Hand Gesture Controlled Robot
 
Gesture Recognition System
Gesture Recognition SystemGesture Recognition System
Gesture Recognition System
 
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
 

Mais de butest

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEbutest
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALbutest
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jacksonbutest
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...butest
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALbutest
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer IIbutest
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazzbutest
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.docbutest
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1butest
 
Facebook
Facebook Facebook
Facebook butest
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...butest
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...butest
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTbutest
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docbutest
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docbutest
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.docbutest
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!butest
 

Mais de butest (20)

EL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBEEL MODELO DE NEGOCIO DE YOUTUBE
EL MODELO DE NEGOCIO DE YOUTUBE
 
1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同1. MPEG I.B.P frame之不同
1. MPEG I.B.P frame之不同
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Timeline: The Life of Michael Jackson
Timeline: The Life of Michael JacksonTimeline: The Life of Michael Jackson
Timeline: The Life of Michael Jackson
 
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
 
LESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIALLESSONS FROM THE MICHAEL JACKSON TRIAL
LESSONS FROM THE MICHAEL JACKSON TRIAL
 
Com 380, Summer II
Com 380, Summer IICom 380, Summer II
Com 380, Summer II
 
PPT
PPTPPT
PPT
 
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet JazzThe MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
 
MICHAEL JACKSON.doc
MICHAEL JACKSON.docMICHAEL JACKSON.doc
MICHAEL JACKSON.doc
 
Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1Social Networks: Twitter Facebook SL - Slide 1
Social Networks: Twitter Facebook SL - Slide 1
 
Facebook
Facebook Facebook
Facebook
 
Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...Executive Summary Hare Chevrolet is a General Motors dealership ...
Executive Summary Hare Chevrolet is a General Motors dealership ...
 
Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...Welcome to the Dougherty County Public Library's Facebook and ...
Welcome to the Dougherty County Public Library's Facebook and ...
 
NEWS ANNOUNCEMENT
NEWS ANNOUNCEMENTNEWS ANNOUNCEMENT
NEWS ANNOUNCEMENT
 
C-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.docC-2100 Ultra Zoom.doc
C-2100 Ultra Zoom.doc
 
MAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.docMAC Printing on ITS Printers.doc.doc
MAC Printing on ITS Printers.doc.doc
 
Mac OS X Guide.doc
Mac OS X Guide.docMac OS X Guide.doc
Mac OS X Guide.doc
 
hier
hierhier
hier
 
WEB DESIGN!
WEB DESIGN!WEB DESIGN!
WEB DESIGN!
 

STEFANO CARRINO

  • 1. STEFANO CARRINO<br />http://home.hefr.ch/carrinos/<br />PhD Student<br />2008-2011<br />Technologies Evaluation &<br />State of the Art<br />This document details technologies for gesture interpretation and analysis and proposes some parameters for a classification. The technologies proposed are <br /> TOC quot; 1-3quot; Introduction PAGEREF _Toc217100831 3<br />Our vision, in brief PAGEREF _Toc217100832 3<br />Technologies Study PAGEREF _Toc217100833 3<br />State of the Art: papers PAGEREF _Toc217100834 3<br />Gesture recognition by computer vision PAGEREF _Toc217100835 3<br />Gesture Recognition by Accelerometers PAGEREF _Toc217100836 5<br />Technology PAGEREF _Toc217100837 7<br />Technology Evaluation PAGEREF _Toc217100838 8<br />Evaluation Criteria PAGEREF _Toc217100839 8<br />Technology Comparison PAGEREF _Toc217100840 8<br />Parameters’ weight PAGEREF _Toc217100841 8<br />Comparison PAGEREF _Toc217100842 10<br />Conclusions and Remarks PAGEREF _Toc217100843 11<br />Accelerometers, gloves and cameras… PAGEREF _Toc217100844 11<br />Proposition PAGEREF _Toc217100845 11<br />Divers PAGEREF _Toc217100846 12<br />Observation PAGEREF _Toc217100847 12<br />Some commonly features for gesture recognition by image analysis PAGEREF _Toc217100848 13<br />Gesture recognition or classification methods PAGEREF _Toc217100849 13<br />quot; Gorilla armquot; PAGEREF _Toc217100850 14<br />References PAGEREF _Toc217100851 14<br />Attached PAGEREF _Toc217100852 16<br />Introduction<br />In the following sections we illustrate the state of the art in technologies for the acquisition of data for gesture recognition. After that we introduce some parameters for the evaluation of these approaches, motivating the weight of each parameter according to our vision. In the last section we highlight the conclusion of this research in the state of the art in this field.<br />Our vision, in brief<br />The AVATAR system will be composed by two elements:<br />The Smart Portable Device (SPD).<br />The Smart Environmental Device (SED).<br />The SPD has to provide the gesture interpretation for all the applications that are environment independent for what may concern the data acquisition (i.e. the cause and effect actions, inputs, computing machine and out put are all inside the SPD self).<br />The SED offers the gesture recognition where the SPD has not good performances. And, in addition, it could offer a layer for the connection of multiple SPD and the possibility of faster elaboration offering its computing power.<br />In this first step of our work we will focus the attention on the SPD but keeping in mind the future developments.<br />Technologies Study<br />The choice of the employed technologies (input) for the gesture interpretation is very in important in order to achieve good results in the gesture recognition. 
stream on a 3GHz desktop computer.
- Interesting references
- 93.76% of postures were classified correctly

GestureVR: Vision-Based 3D Hand Interface for Spatial Interaction [7]
- 1998
- 2 cameras, 60 Hz, 3D space
- 3 gestures
- Finite-state classification

Gesture Recognition by Accelerometers

Accelerometer Based Gesture Recognition for Real Time Applications
- Input: Bluetooth accelerometer
- HMM
- Gestures recognized correctly: 96%
- Reaction time: 300 ms

Accelerometer Based Real-Time Gesture Recognition [8]
- Input: Sony-Ericsson W910i (3-axis accelerometer)
- 97.4% and 96% accuracy on a personalized gesture set
- HMM & SVM (Support Vector Machine); see the sketch at the end of this section
- HMM ("My algorithm was based on a recent Nokia Research Center paper [11] with some modifications. I have used the freely available JAHMM library for the implementation.")
- Runtime was tested on a new-generation MacBook computer with a dual-core 2 GHz processor and 1 GB of memory.
- Recognition time was independent of the number of training examples and averaged 3.7 ms for HMM and 0.4 ms for SVM.

Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer [11]
- 2008
- Input: three-dimensional MEMS accelerometer and a single-chip microcontroller
- 94% Arabic-number recognition

Gesture-recognition with Non-referenced Tracking [12]
- 2005-2006 (?)
- Bluetooth accelerometer (MEMS) + gyroscopes
- 3motion™
- Particular algorithm for gesture recognition
- No numerical results

Real time gesture recognition using Continuous Time Recurrent Neural Networks [13]
- 2007
- Accelerometers
- Continuous Time Recurrent Neural Networks (CTRNN)
- Neuro-fuzzy system (in a previous project)
- Isolated gestures: 98% on the training set and 94% on the testing set
- Realistic environment: 80.5% and 63.6%
- The neuro-fuzzy system cannot cope with dynamic (realistic) situations
- G. Bailador, G. Trivino, and S. Guadarrama. Gesture recognition using a neuro-fuzzy predictor. In International Conference of Artificial Intelligence and Soft Computing. Acta Press, 2006.

ADL Classification Using Triaxial Accelerometers and RFID [14]
- >2004
- ADL = Activities of Daily Living
- 2 wireless (homemade ZigBee) accelerometers for 5 body states
- Glove-type RFID reader
- 90% over 12 ADLs
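Several of the papers above classify accelerometer traces with HMMs. As a rough illustration of that pipeline (a sketch of the general idea, not the method of any specific paper), the following snippet trains one Gaussian HMM per gesture class on 3-axis accelerometer sequences and classifies a new trace by maximum likelihood. It assumes the third-party hmmlearn Python library; the data layout and names are our own.

```python
# Sketch: per-class Gaussian HMMs over 3-axis accelerometer traces.
# Assumes the "hmmlearn" library; the gesture data itself is hypothetical.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(train_set, n_states=5):
    """train_set: dict mapping gesture name -> list of (T, 3) arrays."""
    models = {}
    for name, traces in train_set.items():
        X = np.concatenate(traces)          # stack all traces of this class
        lengths = [len(t) for t in traces]  # per-trace lengths for fitting
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, trace):
    """Return the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(trace))
```

An SVM variant, as in [8], would instead map each trace to a fixed-length feature vector and train a single discriminative classifier, which explains the much lower recognition time reported there.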
Technology

The input devices used in recent years are:

Accelerometers:
- Wireless
- Non-wireless

Cameras [17]:
- Depth-aware cameras. Using specialized cameras, one can generate a depth map of what is being seen at short range and use this data to approximate a 3D representation of the scene. These can be effective for the detection of hand gestures thanks to their short-range capabilities.
- Stereo cameras. Using two cameras whose relation to one another is known, a 3D representation can be approximated from the output of the cameras (see the sketch after this list). This method uses more traditional cameras and thus does not suffer from the distance limitations of current depth-aware cameras. To obtain the cameras' relation, one can use a positioning reference such as a lexian-stripe (?) or infrared emitters.
- Single camera. A normal camera can be used for gesture recognition where the resources/environment would not be suitable for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera makes the approach accessible to a much wider audience.

Angle shape sensors [18]:
- Exploiting the reflection of light inside an optical fibre, it is possible to rebuild a 3D model of the hand(s).
- Also available in wireless form (Bluetooth); the present solutions (gloves) have to be connected with

Infrared technology.

Ultrasound / UWB (Ultra WideBand).

RFID.

Gyroscopes (two angular-velocity sensors).

Controller-based gestures. These controllers act as an extension of the body, so that when gestures are performed some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand; so is the Wii Remote, which can study changes in acceleration over time to represent gestures.
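To make the stereo-camera idea above concrete, here is a minimal sketch of recovering depth from a rectified stereo pair with OpenCV. It assumes calibration has already been done; the file names, focal length and baseline are placeholder values.

```python
# Sketch: depth from a rectified stereo pair with OpenCV.
# File names, focal length and baseline are assumed placeholder values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: disparity is inversely proportional to depth.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> float

focal_px = 700.0     # assumed focal length in pixels
baseline_m = 0.12    # assumed distance between the two cameras (metres)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
```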
Technology Evaluation

Evaluation Criteria

The following is a list of evaluation parameters for the technologies presented in the previous section.

Resolution: in relative terms, resolution describes the smallest change that can be detected. It is expressed as a fraction of an amount to which one can easily relate; for example, printer manufacturers often describe resolution in dots per inch, which is easier to relate to than dots per page.

Accuracy: accuracy describes the amount of uncertainty that exists in a measurement with respect to the relevant absolute standard. It can be defined in several different ways and depends on the specification philosophy of the supplier as well as on the product design. Most accuracy specifications include a gain and an offset parameter.

Latency: the waiting time until the system first responds (a simple measurement sketch is given below).

Range of motion.

User comfort.

Cost, in economic terms.
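As a minimal illustration of how the latency criterion could be measured for any candidate system, the sketch below times the interval between feeding the first input sample and receiving the first recognition result; `recognizer` is a hypothetical callable, not an API of any system discussed here.

```python
# Sketch: measuring first-response latency of a hypothetical recognizer.
import time

def first_response_latency(recognizer, samples):
    """Feed samples one by one; return seconds until the first non-None output."""
    start = time.perf_counter()
    for s in samples:
        if recognizer(s) is not None:   # first recognition result
            return time.perf_counter() - start
    return None                         # the system never responded
```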
Technology Comparison

Parameters' weight

In this section we explain how the weights in the table below were chosen; they characterize "my personal choice".

First) Cost: we are in a research context, so it is not so important to evaluate the cost of our system from a marketing point of view. But I agree with the idea forwarded by H. Ford: "True progress is made only when the advantages of a new technology are within reach of everyone". For this reason cost also appears as a parameter in the table: a concept without any possible future practical application is useless (gloves for hand modelling costing $5000 or more are quite hard to imagine in a cheaper form in the future).

Second) User comfort: a technology completely invisible to the user would be ideal. From this perspective it is not easy to deal with the challenge of how to interface the user with the system. For example, implementing gesture recognition without any burden on the final user (gloves, cameras, sensors…) is not a dream; on the other hand, the output and the feedback still have to be presented to the user. From this viewpoint a head-mounted display (we are thinking about applications in the context of augmented reality) looks like the first natural solution. At this point, adding a camera to this device does not worsen the situation, while bringing a huge advantage (and future possibilities):

- Possible uncoupling from the environment (if enough computational power is provided to the user): all the technology is on the user.
- In any case, if we need it, we can establish a network with other systems to gain more information and enrich our system.
- We are able to enter the domain of wearable/mobile systems. It is a challenge, but it makes our system more valuable and richer.

Third) Range of motion: this is a direct consequence of the previous point. With a wearable technology we can get rid of this problem; the range of motion is strictly related to the context and does not depend on our system. With other choices (e.g. cameras and sensors in the environment) the system will work only in a specific environment and can lose generality.

Fourth) Latency: dealing with this problem at this level is rather premature. The latency depends on the technology used and on the algorithms applied for gesture recognition and tracking, but potentially also on other parameters such as the distance between the input system, the elaboration system and the output/feedback system. (For example, if the vector of information is sound, the time of flight may not be negligible in a real-time system.)

Fifth) Accuracy & resolution: first of all, the system has to be reliable, so these parameters are really meaningful for our application. As far as we are concerned, we would like a tracking system able to correctly discern a small vocabulary of gestures and to make possible realistic interactions with three-dimensional virtual objects in a three-dimensional mixed world.

Comparison

Analyzing the input approaches we have noticed two things:

- Some of the devices presented here are the direct evolution of earlier ones;
- Nowadays some technologies are (at least in this domain) evidently inferior to others.

According to the first point, we discard wired accelerometers from further analysis: they have no advantages compared to the equivalent wireless solution. Based on the second point, we can exclude RFID in favour of UWB. In the previous section we also listed "gyroscopes" as a possible technology; this is not completely correct, since in reality this kind of technology is really applicable only if integrated with accelerometers or other sensors.

Technologies \ Parameters    | Resolution - Accuracy | Latency | Range of motion | User comfort | Cost     | RESULTS
Accelerometers - wireless    | 3                     | 4       | 5               | 2            | 5        | 55
Camera - single camera       | 2                     | 4       | 5               | 4            | 4        | 53
Camera - stereo cameras      | 3                     | 2       | ?               | 3 (?)        | 3        | 26+3*?
Camera - depth-aware cameras | 4                     | 4 (?)   | 5               | 3            | 3        | 60
Angle shape sensor (gloves)  | 4                     | 4       | 5               | 2            | 1 (-100) | 54
Infrared technology          | 4                     | 4       | 5               | 4            | 4        | 63
Ultrasound                   | 2                     | ?       | ?               | ?            | ?        | 10+X
Weight                       | 5                     | 4       | 3               | 2            | 1        |

The RESULTS column combines each row's scores as a weighted sum using the weights in the bottom row (a small sketch of the computation is given below). From this table we have identified the two most interesting approaches:

- Infrared technology;
- Depth-aware cameras.

In reality these two technologies are not uncorrelated: indeed, depth-aware cameras are often equipped with infrared emitters and receivers to compute the position in space of the objects in the camera's field of view [19].
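To make the RESULTS column reproducible, here is a small sketch of the weighted-sum computation, with the scores and weights taken directly from the table (rows with unknown cells are omitted).

```python
# Sketch: weighted scores from the comparison table above.
# Weights: Resolution-Accuracy=5, Latency=4, Range=3, Comfort=2, Cost=1.
weights = [5, 4, 3, 2, 1]

scores = {
    "Accelerometers - wireless": [3, 4, 5, 2, 5],
    "Camera - single camera":    [2, 4, 5, 4, 4],
    "Infrared technology":       [4, 4, 5, 4, 4],
}

for tech, row in scores.items():
    total = sum(w * s for w, s in zip(weights, row))
    print(f"{tech}: {total}")   # 55, 53 and 63, as in the table
```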
Conclusions and Remarks

Choosing a technology for our future work was not easy at all! Above all, the validity of a technology is strictly linked to its use: for example, the results obtained using a camera for gesture interpretation are strictly connected to the algorithms used to recognise the gestures. So it is impracticable to say THIS IS THE technology to use. Moreover, there are other factors (such as technical evolution) that we have to take into account.

Computer vision offers the user a less cumbersome interface, requiring only that they remain within the field of view of the camera or cameras; gesture and posture recognition is obtained by deducing features and movement in real time from the captured images. However, computer vision typically requires good lighting conditions, and the occlusion issue makes this solution application dependent.

Generally, there are two principal ways to tackle the issues tied to gesture recognition:
- Computer vision;
- Accelerometers (often coupled with gyroscopes or other sensors).

Each approach has advantages and disadvantages. In general, published research reports gesture recognition rates above 80% (often above 90%) within a restricted vocabulary. However, the evolution of new technology keeps pushing these results toward higher levels.

Accelerometers, gloves and cameras…

The scenarios we have thought about are in the context of augmented reality. For this reason it is natural to think of a head-mounted display, and adding a lightweight camera to it will not drastically change the user's comfort.

Wireless technology provides us with not-too-cumbersome sensors, but their integration on a human body is somewhat intrusive.

Gloves are another simple and (in my opinion) not-too-intrusive device, but reliable mapping in a 3D space nowadays comes at a non-negligible cost [18].

However, considering generalized scenarios and the most varied types of gesture (body, arms, hands…), we do not discard the idea of bringing together several kinds of sensors.

Proposition

What we propose for the next step is to think about scientific problems such as user identification and multi-user management, context dependence (tracking), the definition of a model/language of gesture, and gesture recognition (acquisition and analysis).

All this while fixing two goals for the future applications:

Usability. That is:
- Robustness;
- Reliability.
And not (at this moment):
- Ease of wearing (weight).

Augmented / virtual reality applicability:
- Mobility;
- 3D gesture recognition capability;
- Dynamic (and static?) gesture recognition.

As next steps I will define the following:
- Work environment;
- Definition of a framework for gesture modelling (???);
- Acquisition technology selection;
- Delve into the state of the art for what concerns:
  - Gesture vocabulary definition
  - Action theory
  - Framework for gesture modelling

The choice of the kind of gesture model will be made in anticipation of the following step: extending gesture interpretation to the environment. In this perspective we will also need a strategy to add a tracking system that determines the user's position together with the head position and orientation. This will be necessary if we want to be independent of visual markers or similar solutions.

Divers (Miscellaneous)

Observation [13]:

Hidden Markov models, dynamic programming and neural networks have been investigated for gesture recognition, with hidden Markov models being nowadays one of the predominant approaches for classifying sporadic gestures (e.g. classification of intentional gestures). Expert fuzzy systems have also been investigated for gesture recognition based on analyzing complex features of the signal, such as the Doppler spectrum. The disadvantage of these methods is that the classification is based on the separability of the features; therefore, two different gestures with similar values for these features may be difficult to classify.

Some common features for gesture recognition by image analysis [6]:
- Image moments.
- Skin tone blobs.
- Coloured markers.
- Geometric features.
- Multiscale shape characterization.
- Motion History Images and Motion Energy Images.
- Shape signatures.
- Polygonal approximation-based shape descriptors.
- Shape descriptors based upon regions and graphs.

Gesture recognition or classification methods [16]

The following gesture recognition or classification methods have been proposed in the literature so far:
- Hidden Markov Model (HMM).
- Time Delay Neural Network (TDNN).
- Elman Network.
- Dynamic Time Warping (DTW) (see the sketch after this list).
- Dynamic Programming.
- Bayesian Classifier.
- Multi-layer Perceptrons.
- Genetic Algorithm.
- Fuzzy Inference Engine.
- Template Matching.
- Condensation Algorithm.
- Radial Basis Functions.
- Self-Organizing Map.
- Binary Associative Machines.
- Syntactic Pattern Recognition.
- Decision Tree.
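As an illustration of one entry in this list, here is a minimal Dynamic Time Warping sketch for comparing two gesture traces (e.g. accelerometer sequences). It is the generic textbook formulation, not the algorithm of any of the cited papers.

```python
# Sketch: classic Dynamic Time Warping distance between two sequences.
import numpy as np

def dtw_distance(a, b):
    """a, b: arrays of shape (T, d). Returns the DTW alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            D[i, j] = cost + min(D[i - 1, j],            # insertion
                                 D[i, j - 1],            # deletion
                                 D[i - 1, j - 1])        # match
    return D[n, m]
```

A nearest-neighbour classifier over stored gesture templates then simply picks the template with the smallest DTW distance, which is also how the Template Matching entry in the same list is typically realized.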
"Gorilla arm"

"Gorilla arm" [21] was a side effect that destroyed vertically-oriented touch-screens as a mainstream input technology despite a promising start in the early 1980s.

Designers of touch-menu systems failed to notice that humans aren't designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized; the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale for human-factors designers; "Remember the gorilla arm!" is shorthand for "How is this going to fly in real use?"

Gorilla arm is not a problem for specialist short-term uses, since they involve only brief interactions which do not last long enough to cause it.

References

Yamamoto, Y.; Yoda, I.; Sakaue, K.; Arm-pointing gesture interface using surrounded stereo cameras system, Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, Volume 4, 23-26 Aug. 2004, Page(s): 965-970.

Kettebekov, S.; Yeasin, M.; Sharma, R.; Improving continuous gesture recognition with spoken prosody, Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, Volume 1, 18-20 June 2003, Page(s): I-565 - I-570.

Kai Nickel, Rainer Stiefelhagen; Pointing gesture recognition based on 3D-tracking of face, hands and head orientation, Proceedings of the 5th International Conference on Multimodal Interfaces, November 05-07, 2003, Vancouver, British Columbia, Canada.

Rajko, S.; Gang Qian; Ingalls, T.; James, J.; Real-time Gesture Recognition with Minimal Training Requirements and On-line Learning, Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, 17-22 June 2007, Page(s): 1-8.

Lementec, J.-C.; Bajcsy, P.; Recognition of arm gestures using multiple orientation sensors: gesture classification, Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on, 3-6 Oct. 2004, Page(s): 965-970.
Kolsch, M.; Turk, M.; Hollerer, T.; Vision-based interfaces for mobility, Mobile and Ubiquitous Systems: Networking and Services, 2004. MOBIQUITOUS 2004. The First Annual International Conference on, 22-26 Aug. 2004, Page(s): 86-94.

Jakub Segen, Senthil Kumar; GestureVR: vision-based 3D hand interface for spatial interaction, Proceedings of the Sixth ACM International Conference on Multimedia, p. 455-464, September 13-16, 1998, Bristol, United Kingdom.

Beedkar, K.; Shah, D.; Accelerometer Based Gesture Recognition for Real Time Applications, Real Time Systems project description; MS CS, Georgia Institute of Technology.

Zoltán Prekopcsák, Péter Halácsy, Csaba Gáspár-Papanek; Design and development of an everyday hand gesture interface, in MobileHCI '08: Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services, Amsterdam, the Netherlands, September 2008.

Zoltán Prekopcsák; Accelerometer Based Real-Time Gesture Recognition, in POSTER 2008: Proceedings of the 12th International Student Conference on Electrical Engineering, Prague, Czech Republic, May 2008.

Zhang, Shiqi; Yuan, Chun; Zhang, Yan; Self-Defined Gesture Recognition on Keyless Handheld Devices using MEMS 3D Accelerometer, Natural Computation, 2008. ICNC '08. Fourth International Conference on, Volume 4, 18-20 Oct. 2008, Page(s): 237-241.

Keir, P.; Payne, J.; Elgoyhen, J.; Horner, M.; Naef, M.; Anderson, P.; Gesture-recognition with Non-referenced Tracking, 3D User Interfaces, 2006. 3DUI 2006. IEEE Symposium on, 25-29 March 2006, Page(s): 151-158.

G. Bailador, D. Roggen, G. Tröster, and G. Triviño; Real time gesture recognition using Continuous Time Recurrent Neural Networks, in 2nd Int. Conf. on Body Area Networks (BodyNets), 2007.

Im, Saemi; Kim, Ig-Jae; Ahn, Sang Chul; Kim, Hyoung-Gon; Automatic ADL classification using 3-axial accelerometers and RFID sensor, Multisensor Fusion and Integration for Intelligent Systems, 2008. MFI 2008. IEEE International Conference on, 20-22 Aug. 2008, Page(s): 697-702.

S. Mitra, T. Acharya; Gesture Recognition: A Survey, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007.

Hafiz Adnan Habib; Gesture Recognition Based Intelligent Algorithms for Virtual Keyboard Development. A thesis submitted in partial fulfilment for the degree of Doctor of Philosophy.

http://en.wikipedia.org/wiki/Gesture_recognition

http://www.5dt.com/ (see the attached documentation)

http://www.3dvsystems.com/ (see the attached documentation)

http://en.wikipedia.org/wiki/Touchscreen

Attached
5DT Data Glove 5 Ultra

Product Description
The 5DT Data Glove 5 Ultra is designed to satisfy the stringent requirements of modern motion capture and animation professionals. It offers comfort, ease of use, a small form factor and multiple application drivers. The high data quality, low cross-correlation and high data rate make it ideal for realistic real-time animation. The 5DT Data Glove 5 Ultra measures finger flexure (1 sensor per finger) of the user's hand. The system interfaces with the computer via a USB cable; a serial port (RS-232, platform independent) option is available through the 5DT Data Glove Ultra Serial Interface Kit. It features 8-bit flexure resolution, extreme comfort, low drift and an open architecture. The 5DT Data Glove Ultra Wireless Kit interfaces with the computer via Bluetooth technology (up to 20 m distance) for high-speed connectivity, for up to 8 hours on a single battery. Right- and left-handed models are available. One size fits many (stretch lycra).

Features
- Advanced sensor technology
- Wide application support
- Affordable quality
- Extreme comfort
- One size fits many
- Automatic calibration, minimum 8-bit flexure resolution
- Platform independent: USB or serial interface (RS-232)
- Cross-platform SDK
- Bundled software
- High update rate
- On-board processor
- Low crosstalk between fingers
- Wireless version available (5DT Ultra Wireless Kit)
- Quick "hot release" connection

Related Products
5DT Data Glove 14 Ultra; 5DT Data Glove 5 MRI (for Magnetic Resonance Imaging applications); 5DT Data Glove 16 MRI (for Magnetic Resonance Imaging applications); 5DT Wireless Kit Ultra; 5DT Serial Interface Kit.

Data Sheets and Manuals
5DT Data Glove Series data sheet: 5DTDataGloveUltraDatasheet.pdf (124 KB). 5DT Data Glove 5 manual: 5DT Data Glove Ultra - Manual.pdf (2,168 KB). A PDF viewer is required; Adobe Acrobat Reader can be downloaded from http://www.adobe.com/products/acrobat/readstep.html.

Glove SDK
Windows and Linux SDK (free): the current version of the Windows SDK is 2.0 and of the Linux SDK 1.04a. The driver works for all versions of the 5DT Data Glove Series; please refer to the driver manual for instructions on how to install and use it. Windows users will need a program that can open ZIP files, such as WinZip (www.winzip.com); for Linux, use the "unzip" command.
- Windows 95/98/NT/2000 SDK: GloveSDK_2.0.zip (212 KB)
- Linux SDK: 5DTDataGloveDriver1_04a.zip (89.0 KB)
- Complete packages containing all the SDKs, manuals, glove software and data sheets: GloveSetup_Win2.2.exe (13.4 MB) for Windows 95/98/NT/2000; 5DTDataGloveSeriesLinux1_02.zip (1.21 MB) for Linux.

Unix driver: the 5DT Data Glove Ultra Driver for Unix provides access to the 5DT range of data gloves at an intermediate level. The driver functionality includes multiple instances, easy initialization and shutdown, basic (raw) sensor values, scaled (auto-calibrated) sensor values, calibration functions, basic gesture recognition and a cross-platform Application Programming Interface (API). The driver utilizes POSIX threads. Pricing for this driver is shown below. Go to our Downloads page for more drivers, data sheets, software and manuals.

Pricing
PRODUCT NAME | PRODUCT DESCRIPTION | PRICE
5DT Glove 5 Ultra Right-handed | 5-sensor data glove: right-handed | US$995
5DT Glove 5 Ultra Left-handed | 5-sensor data glove: left-handed | US$995
Accessories:
5DT Ultra Wireless Kit | Kit allows for 2 gloves in one compact package | US$1,495
5DT Data Glove Serial Kit | Serial interface kit | US$195
Drivers & Software:
Alias/Kaydara MOCAP Driver | | US$495
3D Studio Max 6.0 Driver | | US$295
Maya Driver | | US$295
SoftImage XSI Driver | | US$295
UNIX SDK (please note: serial only, no USB drivers) | | US$495
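The SDK text above mentions scaled (auto-calibrated) sensor values and basic gesture recognition. As a purely illustrative sketch of what such basic recognition can look like (this is not the 5DT API; the glove read-out is hypothetical), five per-finger flexure values in [0, 1] can be thresholded into a binary posture code and looked up in a posture table:

```python
# Sketch: naive posture recognition from five scaled flexure values in [0, 1].
# The glove read-out is hypothetical; a real driver would supply the values.
FLEX_THRESHOLD = 0.5   # above this a finger counts as "bent" (assumed value)

POSTURES = {           # binary code: thumb, index, middle, ring, little
    (0, 0, 0, 0, 0): "flat hand",
    (1, 1, 1, 1, 1): "fist",
    (1, 0, 1, 1, 1): "point (index extended)",
}

def classify_posture(flexure):
    """flexure: iterable of 5 floats in [0, 1], one per finger."""
    code = tuple(int(f > FLEX_THRESHOLD) for f in flexure)
    return POSTURES.get(code, "unknown")

print(classify_posture([0.9, 0.1, 0.8, 0.7, 0.9]))  # -> "point (index extended)"
```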
ZCam™: 3D video cameras by 3DV

Since it was established, 3DV Systems has developed four generations of depth cameras. Its primary focus in developing new products throughout the years has been to reduce their cost and size, so that this unique state-of-the-art technology becomes affordable and meets the needs of consumers as well as those of multiple industries.

In recent years 3DV has been developing DeepC™, a chipset that embodies the company's core depth-sensing technology. This chipset can be fitted into any camera for any application, so that partners (e.g. OEMs) can use their own know-how, market reach and supply chain in the design and manufacturing of the overall camera. The chipset will be available for sale soon.

The new ZCam™ (previously Z-Sense), 3DV's most recently completed prototype camera, is based on DeepC™ and is the company's smallest and most cost-effective 3D camera. At the size of a standard webcam and at an affordable cost, it provides very accurate depth information at high speed (60 frames per second) and high depth resolution (1-2 cm). At the same time, it provides synchronized and synthesized quality colour (RGB) video (at 1.3 megapixels). With these specifications, the new ZCam™ is ideal for PC-based gaming and for background replacement in web-conferencing (a minimal sketch of such depth-based segmentation follows below). Game developers, web-conferencing service providers and gaming enthusiasts interested in the new ZCam™ are invited to contact us.

As previously mentioned, the new ZCam™ and DeepC™ are the latest achievements backed by a tradition of providing high-quality depth-sensing products. Z-Cam™, the first depth video camera, was released in 2000 and was targeted primarily at broadcasting organizations. Z-Mini™ and DMC-100™ followed, each representing another leap forward in reducing cost and size.
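As an illustration of the background-replacement use case mentioned above, the sketch below segments the foreground with a simple depth threshold and composites it over a new background. The depth and colour frames are hypothetical numpy arrays, not the output of an actual ZCam™ driver.

```python
# Sketch: depth-threshold background replacement (hypothetical frames).
import numpy as np

def replace_background(color, depth_m, background, max_depth_m=1.0):
    """color: (H, W, 3) uint8; depth_m: (H, W) metres; background: (H, W, 3) uint8."""
    foreground = depth_m < max_depth_m      # pixels closer than the cutoff
    out = background.copy()
    out[foreground] = color[foreground]     # keep the near-field subject
    return out

# Toy usage with random frames of matching shape:
h, w = 480, 640
color = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
depth = np.random.uniform(0.3, 3.0, (h, w))
bg = np.zeros((h, w, 3), dtype=np.uint8)
composite = replace_background(color, depth, bg)
```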