E. Vincent Cross II, Ph.D.
Lockheed Martin/NASA, Human Factors Design Engineer
856-887-1791, crossev@gmail.com
Research Statement
Executive Summary
My interests consist of two interrelated tracks: Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI). These two parallel paths share many underlying themes and have led my research to focus on factors that negatively impact people's ability to use technology effectively to augment their capabilities, for example, improving the interaction of touch-enabled devices that present critical task information on small displays. Specifically, I am interested in how ineffective interaction design and overall poor system design can compromise the performance of users. I prefer to frame my research agenda around real problems for real users, which allows me to 1) center the research questions on the problem domain, and 2) develop theories that can be empirically tested using prototype systems and actual users. This has led to my research having a tremendous impact in the Space (see the NASA YouTube video of current research, https://www.youtube.com/watch?v=-ZAcBOf6nnE&feature=youtu.be), Government (see efforts to use my Prime III research in government elections, http://www.primevotingsystem.org/), and Military (expanded use of my robot supervision research in other military applications) domains. Further impact of my research in HCI and HRI is noted through my publications and in the $1.5M in funding I have acquired over 5 years from Lockheed Martin, the Air Force Research Laboratory (AFRL), and NASA.
Human-Computer Interaction Research
Previous HCI research investigated interaction design problems associated with displaying critical information on Direct Recording Electronic (DRE) voting systems. My research focused on users misinterpreting, overlooking, or ignoring the original intent of the displayed information, resulting in voters perceiving that the DRE systems were incorrectly recording their votes. By applying HCI design principles, I addressed these issues with Prime III, a multimodal DRE that allows voters to vote using speech or a touch interface (http://www.primevotingsystem.org/). A number of empirical evaluations have been performed over the years [1-6], each of which targeted specific usability issues. The impact of this effort is noted in its use in numerous government and national-organization elections, in news publications, and in the continued development of Prime III as an open-source system (https://github.com/HXRL/Prime-III.git/).
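To illustrate the multimodal interaction pattern at the heart of Prime III, the sketch below shows how speech and touch input could be routed to a single selection handler so either modality produces the same, confirmable result. This is my own simplified illustration of the concept; the class names and matching logic are assumptions, not code from the Prime III repository.

```python
# Minimal sketch of a multimodal ballot interface (illustrative only; not
# taken from the Prime III codebase). Speech and touch both resolve to the
# same selection handler, and every selection is echoed back so voters can
# verify what was recorded.

class Ballot:
    def __init__(self, candidates):
        self.candidates = candidates  # e.g., {"A1": "Candidate One"}
        self.selection = None

    def select(self, candidate_id):
        if candidate_id not in self.candidates:
            raise ValueError(f"Unknown candidate: {candidate_id}")
        self.selection = candidate_id
        return f"You selected {self.candidates[candidate_id]}. Confirm?"

def on_touch(ballot, candidate_id):
    # Touch path: the tapped button already carries the candidate id.
    return ballot.select(candidate_id)

def on_speech(ballot, utterance):
    # Speech path: match the recognized utterance against candidate names.
    for cid, name in ballot.candidates.items():
        if name.lower() in utterance.lower():
            return ballot.select(cid)
    return "I did not understand your choice. Please try again."

ballot = Ballot({"A1": "Candidate One", "A2": "Candidate Two"})
print(on_touch(ballot, "A1"))
print(on_speech(ballot, "I would like to vote for candidate two"))
```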
My HCI research has continued to explore interaction design issues, specifically for touchscreen interfaces in which the real estate for displaying information is limited but the amount of information available for display is greatly increased. This poses unique challenges for information design and navigation schemes. To anchor this effort, my research team focused on interaction design issues associated with displaying information effectively to astronauts performing Extravehicular Activities (EVA). Through a systematic evaluation of multiple approaches (multi-dimensional icons such as Chernoff faces and stick figures, tables, color highlighting, and voice), we were able to develop design recommendations for NASA on displaying critical information on the small touchscreen interfaces used during an EVA. Although grounded in the space domain, this work has a significant impact on terrestrial applications in domains such as Urban Search and Rescue (USAR), mining, and law enforcement.
Currently, my HCI research is focused on assisting NASA with understanding how changes in microgravity can affect the fine motor skills necessary to work with the touch-enabled interfaces being designed for future spacecraft, e.g., the Orion space capsule. Empirical evaluations are currently being performed on the ISS using astronauts as subjects. NASA has recently highlighted the importance of this work on their YouTube channel; see https://www.youtube.com/watch?v=-ZAcBOf6nnE&feature=youtu.be.
Future HCI research plans are focused on leveraging my existing HCI work to enable Ambient Intelligence (AmI) environments. AmI is the vision that technology will become invisible, embedded in our natural environment, sensitive and responsive to simple and effortless interactions, adaptive to users and context, and behaving autonomously in an undetectable and nonintrusive manner. My AmI research goal is to explore
human-centered design challenges related to understanding information presented by intelligent systems and interacting with intelligent systems that are ubiquitous in the user's environment. I am currently proposing a research agenda to NASA that will investigate the benefits of using AmI as part of long-duration habitats.
Human-Robot Interaction Research
I have been able to leverage my HCI research to address related issues within Human-Robot Teaming, where my previous and current research focuses on developing human-robot teams as a single, integrated system. I believe that robots can act as "force multipliers" for human teams, i.e., augment the capability and proficiency of humans, thus allowing the team to do more with less. This approach leverages the strengths of both the human and the robot while ameliorating the weaknesses of each. However, for this to become a reality, I have focused on the following key research challenges.
The ability for humans to provide Robot Supervision is critical to developing team cohesion. While the HRI community (myself included at one point) was focused on enabling a single operator to supervise multiple robots by taking on multiple roles, my team argued that this was not the best approach. Instead, we theorized that a better approach would be to allow multiple humans to supervise a larger robot team, i.e., N robots supervised by M people, where N > M. With this approach, the team (humans and robots) can assist each other in managing their workload during changes in a mission, e.g., the occurrence of an anomaly. To test this theory, we developed Supervision of UxV Mission Management by Interactive Teams (SUMMIT) and applied it to the Littoral Combat Ship (LCS) Mine Countermeasures (MCM) mission package [8]. Multiple empirical evaluations were performed with unmanned surface vehicles and sailors from Mine Warfare Detachment 1 of the LCS MCM mission package [9], showing the benefit to the team's performance when using this approach. The success of this research is noted in the expanded use of our supervised robot approach (SUMMIT) in other Navy domains.
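One simple way to picture this many-robots-to-fewer-operators supervision model is as a dynamic assignment problem in which a robot needing attention is routed to whichever operator currently has the most spare capacity. The sketch below is my own abstraction of that idea; the class names and workload heuristic are illustrative assumptions, not the SUMMIT implementation.

```python
# Illustrative sketch of operator workload balancing when N robots > M operators.
# This abstracts the supervision concept only; it is not SUMMIT code, and the
# workload heuristic (assigned robots plus double-weighted anomalies) is assumed.
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    assigned: list = field(default_factory=list)
    pending_anomalies: int = 0

    def workload(self):
        # Each assigned robot counts once; an open anomaly counts double
        # because it demands more of the operator's attention.
        return len(self.assigned) + 2 * self.pending_anomalies

def assign_robot(operators, robot_id):
    # Route the robot needing supervision to the least-loaded operator.
    target = min(operators, key=lambda op: op.workload())
    target.assigned.append(robot_id)
    return target.name

operators = [Operator("op1"), Operator("op2")]
for robot in ["uxv1", "uxv2", "uxv3", "uxv4", "uxv5"]:
    print(robot, "->", assign_robot(operators, robot))

# When an anomaly occurs, the affected operator's workload rises, so robots
# that subsequently need attention shift toward less-burdened teammates.
operators[0].pending_anomalies += 1
print("uxv6 ->", assign_robot(operators, "uxv6"))
```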
For robots to be part of a team, they need to be able to Share Information in a way that makes sense to their human team members. To accomplish this, I propose that robots need to learn how to think like humans. To address this challenge, my team and I developed a novel world model representation that provides robots with the ability to merge semantic information communicated by their human teammates with their own sensor data. The robots can then use the resulting "operating picture" to drive planning and decision-making in unfamiliar environments. To evaluate the feasibility of the world model framework, several proof-of-concept simulations were designed to demonstrate and evaluate the capabilities of the world model and, subsequently, to inform the design process. The results of these simulations showed that robots could formulate a plan based on semantic information supplied by humans and on data from robots with different sensors [10].
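To make this concrete, one can think of the world model as a store keyed by region or object into which both human-supplied semantic labels and robot sensor observations are merged, with the planner querying the fused picture. The sketch below is a hypothetical simplification; the schema, field names, and merge rule are my assumptions, not the framework published in [10].

```python
# Hypothetical sketch of a world model that fuses human semantic input with
# robot sensor observations. The schema and merge policy are assumptions made
# for illustration, not the representation described in [10].

class WorldModel:
    def __init__(self):
        self.entries = {}  # region id -> {attribute: (value, source)}

    def add_semantic(self, region, **facts):
        # A human teammate supplies labels such as "the hallway is blocked".
        self.entries.setdefault(region, {}).update(
            {k: (v, "human") for k, v in facts.items()})

    def add_observation(self, region, **facts):
        # Robot sensor data for the same region, tagged with its source.
        self.entries.setdefault(region, {}).update(
            {k: (v, "sensor") for k, v in facts.items()})

    def traversable_regions(self):
        # A planner query over the fused picture: regions not marked blocked.
        return [r for r, attrs in self.entries.items()
                if not attrs.get("blocked", (False, None))[0]]

wm = WorldModel()
wm.add_semantic("hallway_2", blocked=True)              # told by a human teammate
wm.add_observation("room_1", blocked=False, temp_c=21)  # sensed by the robot
print(wm.traversable_regions())  # -> ['room_1']
```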
For teams to be effective, there also needs to be Trust. Oftentimes we lose trust in robotic systems due to our inability to comprehend the reasons for their behavior. This has contributed to issues with the adoption of more autonomous robots within the DoD, NASA, and other domains. My research has been exploring approaches that would allow robots to explain the reasons for their actions. Leveraging the previous shared-information work, we developed an approach that provides robots with the capability to explain their behavior to an operator in human-understandable terms. This removes the guesswork for the operator by providing the why of a behavior, and it improves predictability by supporting insight into potential future robot actions [11].
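In the same spirit, an explanation capability can be sketched as tracing a behavior back to the world-model facts that triggered it and rendering those facts in plain language. The example below is an illustrative toy with invented rule names and message templates, not the system reported in [11].

```python
# Toy sketch of behavior explanation: each behavior maps to a template that is
# filled with the world-model facts that triggered it. Rule names and message
# templates are invented for illustration only.

RULES = {
    "reroute": "I changed my route because {region} was reported blocked by {source}.",
    "hold_position": "I stopped because my localization confidence dropped below {threshold}.",
}

def explain(behavior, context):
    # Render the triggering facts in human-understandable terms.
    template = RULES.get(behavior)
    if template is None:
        return f"No explanation is available for behavior '{behavior}'."
    return template.format(**context)

print(explain("reroute", {"region": "hallway_2", "source": "a human teammate"}))
print(explain("hold_position", {"threshold": 0.4}))
```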
Future HRI research plans continue to focus on integrating robots into human teams, with a further progression toward improving the overall user experience with robotic systems. I hypothesize that by focusing on the user experience, i.e., the hedonic and pragmatic qualities of the interaction design, we can better develop robotic systems that humans choose to use and enjoy using. This is leading to exciting new research questions: how to measure the effectiveness of human-robot collaborations using UX metrics, how robots should resolve conflicts within human-robot teams, and how tasks should be assigned between humans and robots when accounting for the pragmatic and hedonic attributes of automated systems. Each of these research topics impacts the overall experience humans have with robots and can affect the ability of the operator to use the system optimally. I currently have proposals submitted to NASA, AFRL, and ARL to explore research in this area.
References
1. Cross, E.V., Dawkins, S., Rogers, G., McClendon, J., Sullivan, T., Tian, Y., Rouse, K., Gilbert, J. (2009) Everyone
Counts Universal Access to Voting. In Stephanidis, C., editors, Proceedings of Human Computer Interaction
International, Vol. 5616 of Lecture Notes in Computer Science, pp. 324-332. Springer-Heidelberg, 2009.
2. Dawkins, S., Cross, E.V., Rogers, G., McClendon, J., Gilbert, J. (2009) Prime III: An Innovative Electronic Voting Interface. In Proceedings of the 2009 International Conference on Intelligent User Interfaces (IUI '09). Sanibel Island, FL, USA, February 8-11, 2009.
3. Gilbert, J.E., Williams, P., Cross, E.V., Mkpong-Ruffin, I., McMillian, Y., & Gupta, P. (2008). Usability and Security
in Electronic Voting. E-Voting: Perspectives and Experience, Icfai University Press, pp. 74-80.
4. Cross, E.V., Rogers, G., McClendon, J., Mitchell, W., Rouse, K., Gupta, P., Williams, P., Mkpong-Ruffin, I.,
McMillian, Y., Neely, E., Lane, J., Blunt, H. & Gilbert, J.E. (2007) Prime III: One Machine, One Vote for
Everyone. VoComp 2007, Portland, OR. July 16, 2007.
5. McMillian, Y., Williams, P., Cross, E.V., Mkpong-Ruffin, I., Nobles, K., Gupta, P., & Gilbert J.E. (2007) Prime III:
Where Usable Security and Electronic Voting Meet. Usable Security (USEC ‘07), Lowlands, Scarborough,
Trinidad/Tobago. February 15-16, 2007.
6. Cross, E.V., McMillian, Y., Gupta, P., Williams, P., Nobles, K. & Gilbert, J.E.(2007) Prime III: A User Centered
Voting System. ACM Computer Human Interaction (CHI '07) Extended Abstracts Works In Progress Session, San
Jose, CA, May 2, 2007.
7. Cross, E.V., Gilbert, J. (2008) Effective Supervision of a Robot Team through User Interface Design. ACM
Southeast Conference (ACMSE). Auburn, AL. March 28 – 29, 2008
8. Chevalier, B., Cross, E. V., Moffitt, V. Z., Lomas, M., Craven, P., Garrett, R., Kopack, M., Franke, J. L., & Taylor, J.,
T., “SUMMIT: An Open Architecture Framework that Integrates Stand-alone Applications and Enables
Operator Workload Balancing”. Proceedings of Association for Unmanned Vehicle Systems International (AUVSI
2012). Las Vegas, NV. August 6-9 2012.
9. Lomas, M., Moffitt, V. Z., Craven, P., Cross, E. V., Franke, J. L., & Taylor, J. T. (2011) SUMMIT: A Collaborative Environment for Team-based Control of Heterogeneous Robots. In Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2011). Lausanne, Switzerland. March 6-9, 2011.
10. Lomas, M., Cross, E., Darvill, J., Garrett, R., Kopack, M., & Whitebread, K. (2011) A Robotic World Model Framework Designed to Facilitate Human-Robot Communication. In Proceedings of the 12th Annual SIGdial Meeting on Discourse and Dialogue, Portland, Oregon, pp. 301-306. June 17-18, 2011.
11. Lomas, M., Chevalier, B., Cross, E. V., Hoare, J., Garrett, R., & Kopack, M. (2012) Explaining Robot Actions. In Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012). Boston, MA. March 5-8, 2012.
