Towards a Smart Control Room for Crisis Response Using Visual Perception of Users
Joris Ijsselmuiden, Florian van de Camp, Alexander Schick, Michael Voit, Rainer Stiefelhagen
{iss, ca, sci, vt, stiefe}@iitb.fraunhofer.de
Fraunhofer IITB, Karlsruhe
INTRODUCTION
Due to ever-increasing challenges and complexity, there is high demand for new
human-machine interaction approaches in crisis response scenarios. We aim to
build a smart crisis response control room in which vision-based perception of
users facilitates innovative user interfaces and supports teamwork. Our control
room is equipped with several cameras and has a videowall as the main
interaction device. Using real-time computer vision, we can track and identify
the users in the room and estimate their head orientations and pointing
gestures. In the near future, the room will also be equipped with speech
recognition. To build a useful smart control room for crisis response, we are
currently focusing on situation modeling for such rooms and investigating the
target crisis response scenarios.




Our smart control room laboratory, containing videowall and cameras

GOALS
• Develop new ways of interacting with computers and support interaction
  between humans using tracking, identification, head pose, gestures, speech,
  and situation/user modeling [1,2]
• Conduct user studies to find multimodal system setups that improve
  computer-supported cooperative work in a crisis response control room
• Improve expressive power, ease of use, intuitiveness, speed, reliability,
  adaptability, and cooperation while reducing physical and mental workload
• Create intelligent, context-dependent user interfaces through situation
  modeling and user modeling
• Challenges in crisis response control rooms include team-based operation,
  limits to mental workload, high cost of failure, time pressure, dense and
  complex information, and the user acceptance problem

Person tracking, head pose estimation, and gesture recognition

SYSTEM ARCHITECTURE
• All components run in parallel and in real time
• We use several computers, with multithreading and GPU programming, to obtain
  sufficient computational power
• Our custom-built middleware takes care of network communication
• A centralized situation model (blackboard) is kept, describing the situation
  in the room and the objects and users in it
• All perceptual components can read and write in this situation model, and a
  logic engine uses it to deduce higher-level facts about the situation [6]
• In the near future, our control room laboratory will be extended with some of
  the following: speech recognition, standard workstations, a digital situation
  table [7], tablet PCs, sound, and synthesized speech
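The blackboard idea behind the situation model can be sketched in a few lines: perceptual components write facts, and a forward-chaining rule engine derives higher-level facts from them. This is a minimal illustration, not the system's actual middleware; the fact tuples, the rule contents, and names such as `Blackboard` and `infer` are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Blackboard:
    """Centralized situation model: components write facts,
    a logic engine forward-chains rules over them."""
    facts: set = field(default_factory=set)
    rules: list = field(default_factory=list)

    def write(self, fact):
        self.facts.add(fact)

    def add_rule(self, rule: Callable):
        # A rule maps the current fact set to newly deduced facts.
        self.rules.append(rule)

    def infer(self):
        # Repeat until no rule produces a new fact (fixed point).
        changed = True
        while changed:
            changed = False
            for rule in self.rules:
                new = rule(self.facts) - self.facts
                if new:
                    self.facts |= new
                    changed = True

bb = Blackboard()
# Hypothetical facts from the tracking and head-pose components:
bb.write(("position", "alice", "videowall"))
bb.write(("looking_at", "alice", "videowall"))

# Hypothetical rule: standing at the wall and looking at it means interacting.
bb.add_rule(lambda facts: {
    ("interacting_with", person, "videowall")
    for (kind, person, where) in [f for f in facts if f[0] == "position"]
    if where == "videowall" and ("looking_at", person, "videowall") in facts
})

bb.infer()
print(("interacting_with", "alice", "videowall") in bb.facts)  # True
```

A real deployment would replace the in-process set with networked reads and writes through the middleware, but the read/write/deduce cycle is the same.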
PERCEPTION
• Tracking and identification [3]
• Head pose and visual focus of attention [4]
• Gestures and body pose [5]
• Speech recognition (future work)

Example interaction with the videowall and a digital situation table in operation

This work is supported by the FhG Internal Programs under Grant No. 692 026
(Fraunhofer Attract). It is a collaboration between the Fraunhofer Institute for
Information and Data Processing, Business Unit Interactive Analysis and
Diagnosis, and the University of Karlsruhe (TH), Faculty of Computer Science,
within the framework of the five-year Fraunhofer internal project "Computer
Vision for Human-Computer Interaction – Interaction in and with attentive rooms".
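The head pose component feeds the visual focus of attention estimate [4]. As a rough 2D illustration only (the real system works from multi-view head pose estimation; `focus_of_attention`, the target coordinates, and the angle-matching heuristic are all assumptions of this sketch), one can pick the known target whose direction best matches the estimated head yaw:

```python
import math

def focus_of_attention(head_xy, head_yaw_rad, targets):
    """Toy focus-of-attention estimate: return the target whose
    direction from the head best matches the head orientation."""
    best, best_err = None, math.inf
    for name, (tx, ty) in targets.items():
        direction = math.atan2(ty - head_xy[1], tx - head_xy[0])
        # Angular difference, wrapped to [-pi, pi]:
        err = abs(math.atan2(math.sin(direction - head_yaw_rad),
                             math.cos(direction - head_yaw_rad)))
        if err < best_err:
            best, best_err = name, err
    return best

# Hypothetical room layout (meters, top-down view):
targets = {"videowall": (0.0, 5.0), "situation_table": (4.0, 0.0)}
print(focus_of_attention((0.0, 0.0), math.pi / 2, targets))  # videowall
```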

A camera image and its corresponding segmentation and 3D voxel representation

INTERACTION
1. Identities are obtained through face recognition (in operation)
2. User models are used to generate personal user interfaces, obeying the
   user's preferences, current tasks, and specialized knowledge (future work)
3. Using person tracking, interfaces are displayed close to the user (in operation)
4. Objects on the videowall are manipulated using pointing gestures and by
   directing one's visual attention (in operation)
5. This can be combined with speech recognition and a range of different hand
   gestures (future work)
6. Head pose is employed to analyze the interaction of the team, for example to
   determine who has been talking to whom (in operation)
7. User-specific information can be displayed on the videowall at the user's
   current focus of attention, and we can make people aware of what they
   haven't seen yet (future work)

REFERENCES
1. Project Webpage (2009) www.iitb.fraunhofer.de/?20718
2. Stiefelhagen, Bernardin, Ekenel, Voit (2008) Tracking Identities and
   Attention in Smart Environments - Contributions and Progress in the CHIL
   Project. IEEE International Conference on Face and Gesture Recognition
3. Bernardin, van de Camp, Stiefelhagen (2007) Automatic Person Detection and
   Tracking using Fuzzy Controlled Active Cameras. IEEE International
   Conference on Computer Vision and Pattern Recognition
4. Voit, Stiefelhagen (2008) Deducing the Visual Focus of Attention from Head
   Pose Estimation in Dynamic Multi-view Meeting Scenarios. 10th International
   Conference on Multimodal Interfaces
5. Nickel, Stiefelhagen (2007) Visual Recognition of Pointing Gestures for
   Human-Robot Interaction. Image and Vision Computing
6. Brdiczka, Crowley, Curín, Kleindienst (2009) Situation Modeling. In: Waibel,
   Stiefelhagen (Eds.) Computers in the Human Interaction Loop
7. Bader, Meissner, Tschnerney (2008) Digital Map Table with Fovea-Tablett®:
   Smart Furniture for Emergency Operation Centers. 5th International
   Conference on Information Systems for Crisis Response and Management
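As a toy illustration of INTERACTION item 4: pointing at the videowall can be modeled as a ray from the tracked shoulder through the hand, intersected with the wall plane. The function name `point_on_wall` and the coordinate conventions are invented for this sketch; they are not the authors' implementation, which works from the tracked body pose [5].

```python
def point_on_wall(shoulder, hand, wall_point, wall_normal):
    """Intersect the pointing ray (shoulder -> hand) with the wall plane.
    Returns the 3D hit point, or None when pointing away from the wall."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    ray = sub(hand, shoulder)
    denom = dot(ray, wall_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the wall
    t = dot(sub(wall_point, shoulder), wall_normal) / denom
    if t <= 0:
        return None  # wall is behind the user
    return tuple(s + t * r for s, r in zip(shoulder, ray))

# Wall: the plane y = 5 m, normal facing the user; positions in meters.
hit = point_on_wall((0.0, 0.0, 1.4),   # shoulder
                    (0.0, 0.5, 1.5),   # hand
                    (0.0, 5.0, 0.0),   # a point on the wall
                    (0.0, -1.0, 0.0))  # wall normal
print(hit)  # approximately (0.0, 5.0, 2.4)
```

Mapping the hit point into videowall pixel coordinates is then a 2D affine transform given the wall calibration.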
