This document summarizes a talk on a novel approach to gesture recognition that integrates the Neo4j graph database with the Leap Motion controller. It reviews the limitations of current gesture-recognition techniques and proposes representing hand motions as graphs stored in Neo4j to overcome issues such as data inconsistency and storage requirements. Preliminary results show 93.4% accuracy for constrained digit recognition, and learning rates for the gesture-based interface are about 40% faster than for gesture-free interfaces. The approach aims to shape the future of natural human-interface design.
1. Neo4j in the Future of Interaction Design
A novel approach to gesture recognition integrating Neo4j with the Leap Motion
2. This Talk
- Introduction
- Interaction Design
- The Tyranny of Finger-On-Glass
- The Leap Motion
- Promises and Limitations
- Gesture Recognition
- Current State-of-the-Art
- Building a New Strategy for the Leap
- Conclusions
14. But Something is Rotten in Denmark
- Complex motions are infeasible
- Self-obfuscation is a huge problem
- Interface is surprisingly exhausting
- Drivers are proprietary and imperfect
- Bounding box is small
- Data is fundamentally inconsistent
18. Problems with Classical Approaches to Gestures
- Geared towards easily benchmarked, previously studied problems
- Primarily developed for narrowly-defined industry applications
20. Problems with HMMs
- The next state depends only on the current state, but intuitive hand gestures are inherently hysteretic (path-dependent)
- Depend on discrete gesture identification, with no sense of "variations on a theme"
- Storage requirements grow exponentially when faced with inconsistent data streams
- NOT built for the Leap
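A toy sketch (my own illustration, not code from the talk) of the first point: a first-order Markov model conditions only on the current state, so once two different gestures pass through the same state, the direction of travel that distinguishes them is lost.

```python
from collections import defaultdict

# Two gestures share the state "mid" but should continue differently
# depending on how the hand arrived there (the hysteresis problem).
swipe_right = ["left", "mid", "right"]   # reaches "mid" moving rightward
swipe_left  = ["right", "mid", "left"]   # reaches "mid" moving leftward

# First-order Markov model: next-state counts conditioned on current state only.
transitions = defaultdict(lambda: defaultdict(int))
for trace in (swipe_right, swipe_left):
    for cur, nxt in zip(trace, trace[1:]):
        transitions[cur][nxt] += 1

# From "mid" the model is a 50/50 coin flip: the history that tells the two
# swipes apart has been discarded by the Markov assumption.
dist = transitions["mid"]
total = sum(dist.values())
probs = {state: count / total for state, count in dist.items()}
print(probs)  # {'right': 0.5, 'left': 0.5}
```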
21. Size?
- Minimum 6 DoF per finger + 7 for the palm
- 2 hands, even assuming only two modes of motion per degree of freedom: ≈1.9 × 10^22 states
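The 1.9 × 10^22 figure can be reproduced under one reading of the numbers above: 5 fingers × 6 DoF plus 7 DoF for the palm gives 37 DoF per hand, 74 for two hands, and two modes of motion per degree of freedom yields 2^74 configurations.

```python
# Reproducing the slide's state-count estimate (my reading of the numbers):
dof_per_hand = 5 * 6 + 7          # 5 fingers x 6 DoF + 7 palm DoF = 37
dof_total = 2 * dof_per_hand      # 74 for two hands
# With only two distinguishable modes of motion per degree of freedom,
# the number of joint configurations is 2^74.
states = 2 ** dof_total
print(f"{states:.2e}")  # 1.89e+22, i.e. the slide's 1.9 * 10^22
```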
23. Pros
- Basic mathematics is close enough to that of HMMs that much of the established infrastructure can be leveraged
- Path similarity doesn't rely on consistent data streams and allows for regression testing
- Database can easily be trimmed to reduce size concerns
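A minimal sketch of the second point, using a hypothetical node-ID representation rather than the talk's actual Neo4j schema: scoring two gesture paths by their longest common subsequence stays meaningful even when the sensor drops frames, which is what makes regression testing against stored paths possible.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two node-ID paths."""
    # Classic O(len(a) * len(b)) dynamic programme.
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def path_similarity(a, b):
    """Similarity in [0, 1] between two gesture paths (node-ID sequences)."""
    return lcs_len(a, b) / max(len(a), len(b))

# A stored template path and the same gesture with frames dropped by the
# sensor: the score degrades gracefully instead of failing outright.
template = ["n1", "n2", "n3", "n4", "n5", "n6"]
noisy    = ["n1", "n3", "n4", "n6"]          # inconsistent data stream
print(path_similarity(template, noisy))      # 4/6 ≈ 0.67
```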
24. Cons
- The Leap is very fast, and subgraph comparisons are computationally intensive
- Lots of data that isn't hugely useful to us
- Continuous data ends up being very sensitive to slight perturbations in paths
- A few orders of magnitude down, but just a few
26. Is That Really a Big Difference, Though?
- Syncs up well with our natural perception of gestures
- Reduction of almost 7 full orders of magnitude for comprehensive gesture coverage
- Diffs from node epicenters are more robust and improve regression results
- Greatly reduces the number of calls made to the REST API
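One way to picture the epicenter idea (the grid, node names, and spacing here are my own illustration, not the talk's schema): snapping each continuous Leap sample to the nearest node epicenter collapses noisy recordings of the same motion onto identical node sequences, so diffs are taken between discrete nodes rather than raw coordinates.

```python
import math

# Hypothetical epicenter layout; real node placement would come from the
# trained gesture graph.
EPICENTERS = {
    "n1": (0.0, 0.0, 0.0),
    "n2": (10.0, 0.0, 0.0),
    "n3": (10.0, 10.0, 0.0),
}

def snap(point):
    """Return the ID of the epicenter nearest to a continuous 3-D sample."""
    return min(EPICENTERS, key=lambda n: math.dist(point, EPICENTERS[n]))

# Two noisy recordings of the same motion collapse to the same node path,
# which is what makes epicenter-level diffs robust for regression testing.
run_a = [(0.4, -0.2, 0.1), (9.7, 0.3, -0.2), (10.2, 9.8, 0.1)]
run_b = [(-0.3, 0.5, 0.0), (10.1, -0.4, 0.3), (9.9, 10.2, -0.1)]
print([snap(p) for p in run_a])  # ['n1', 'n2', 'n3']
print([snap(p) for p in run_b])  # ['n1', 'n2', 'n3']
```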
27. Preliminary Results
- Constrained digit recognition benchmarked at 93.4%
- Maximum latency for immersion is ~120 ms
- Learning rates for the gesture-based interface are about 40% faster than for gesture-free interfaces
- Partnership with zSpace
- Continued mentoring from Scott Harris, founder of SolidWorks and Belmont Labs