How can acoustic and visual features be combined with text-based search methods applied to automatic transcripts and subtitles to help retrieve television content?
Multimodal Features for Linking Television Content
Petra Galuščáková
galuscakova@ufal.mff.cuni.cz
Institute of Formal and Applied Linguistics
NLP Applications 11. 5. 2017
Introduction
● Video: 80% of internet traffic in 2019
● It would take a single person 5 million years to view all the
video content crossing the internet during one month
● Mainly generated by streaming services such as Netflix,
Hulu Plus and Amazon Prime
● Followed by YouTube, Vimeo and Vine
● Each minute
● 77,160 hours of videos streamed by Netflix
● More than a million videos played by Vine users
● 300 hours of videos uploaded to YouTube
– rose from 6 hours in 2007 to 24-35 hours in 2010
Motivation
● Increasing number of audio-visual documents
● Wide variety of types of videos
● Progress in ASR and visual processing
systems
● Lack of effective systems for retrieving
information stored in these documents
Multimedia Retrieval
● A set of methods for understanding information
stored in the various media in a manner comparable
with human understanding
● The semantic content of the documents must first be
mined using automatic processing
● ASR, acoustic processing, image processing, face recognition,
signal processing, video content analysis, ...
● Videos
● Different modalities
● No structure
Search in Audio-Visual
Documents
● Input:
● Data collection (video recordings)
● Query
– Given as text
● Output:
● Relevant segments (passages) of documents
Search Examples
● “Medieval history of why castles were first built”
● “FA cup final, old & current, comparison. History of
football”
● “animal park, kenya marathon, wildlife reserve”
Speech Retrieval vs.
Spoken Term Detection
● Spoken Term Detection (Keyword Spotting) is often
not sufficient
● Documents must contain the exact word (or
sometimes different word forms)
E.g. “Rover finds bulletproof evidence of water on
early Mars” vs. “A bulletproof vest is an item of
personal armor that helps absorb the impact from
firearm-fired projectiles”
● Retrieval techniques allow exploiting additional
information, e.g. visual content
Hyperlinking
● Input:
● Data collection (video
recordings)
● Query segment
● Output:
● Segments similar to the query
segment
Hyperlinking Definition
● Hyperlink: an electronic link providing
direct access from one distinctively marked
place in a hypertext or hypermedia
document to another in the same or a
different document.
● The marked source location of the link is called the anchor
● “’give me more information about this
anchor’ instead of ’give me more based on
this anchor or entity’”
Recommender Systems
● Focused on entertainment
● YouTube
● Generated by using a user’s personal activity
(watched, favourited, liked videos) [*]
● TED Talks
● Related talks manually selected by the
editors
[*] James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet, Ullas
Gargi, Sujoy Gupta, Yu He, Mike Lambert, Blake Livingston, and Dasarathi Sampath. 2010. The
YouTube video recommendation system. In Proc. of RecSys '10. ACM, New York, NY, USA,
293-296.
Our Approach
● Content-based search
● Combining speech, acoustic and visual
information
● Retrieval instead of word-spotting
● Passage Retrieval
● Retrieve relevant segments instead of
relevant documents
Multimedia Benchmarks
● Search and Hyperlinking (2012-2014)
● Video Hyperlinking (2015-)
Search and Hyperlinking
Task
● The main goal of the Search Subtask
● To find passages relevant to a user’s interest given by a
textual query in a large set of audio-visual recordings
● And of the Hyperlinking Subtask:
● To find more passages similar to the retrieved ones
● Scenario:
● A user wants to find a piece of information relevant to a
given query in a collection of TV programmes (Search
subtask)
● And then navigate through a large archive using
hyperlinks to the retrieved segments (Hyperlinking
subtask)
BBC Broadcast
Data
● Broadcast between 1 April 2008 and 31 July 2008
● News, documentaries, series,
entertainment programmes, quiz
shows, cookery shows, sport
programmes, ...
● Subtitles
● Three ASR transcripts
● LIMSI, LIUM, NST-Sheffield
● Metadata
● Prosodic features
● Shots, keyframes, visual concepts
Evaluation
● Using crowdsourcing
● MAP-bin and MAP-tol measures
● Adaptations of the MAP measure
● Proposed for evaluation of video content retrieval to
allow a segment retrieved near the relevant segment
(but not necessarily overlapping it) to also be
marked as relevant.
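The tolerance idea can be sketched as follows; the exact MediaEval definition of MAP-tol differs in details, and the tolerance value, function name, and data shapes here are illustrative:

```python
def average_precision_tol(ranked_starts, relevant_starts, tol=30.0):
    """Average precision where a retrieved segment counts as relevant
    if its start time lies within `tol` seconds of a relevant start.
    Each relevant segment may be matched at most once."""
    matched = set()
    hits, ap = 0, 0.0
    for rank, start in enumerate(ranked_starts, 1):
        for i, rel in enumerate(relevant_starts):
            if i not in matched and abs(start - rel) <= tol:
                matched.add(i)
                hits += 1
                ap += hits / rank  # precision at this rank
                break
    return ap / len(relevant_starts) if relevant_starts else 0.0
```

MAP is then the mean of this value over all queries; the binned variant (MAP-bin) instead maps segment starts to fixed-length bins before matching.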
System Description
● Retrieve relevant segments
● Divide documents into 50- and 60-second-long
segments
● A new segment is created every 10 seconds
● Index the textual segments
● A query segment is transformed into a textual query
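The sliding-window segmentation described above can be sketched as follows; the word-tuple format and function name are assumptions, while the 60-second length and 10-second step come from the slides:

```python
def sliding_segments(words, seg_len=60.0, step=10.0):
    """Split a time-stamped transcript into overlapping segments.

    words: list of (start_time, token) tuples, sorted by time.
    A new segment starts every `step` seconds and spans `seg_len` seconds.
    Returns (segment_start, segment_end, segment_text) triples.
    """
    if not words:
        return []
    end_time = words[-1][0]
    segments = []
    start = 0.0
    while start <= end_time:
        text = [tok for t, tok in words if start <= t < start + seg_len]
        if text:  # skip windows containing no speech
            segments.append((start, start + seg_len, " ".join(text)))
        start += step
    return segments
```

Each resulting segment is then indexed as if it were an independent document.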
System Description
● Terrier IR Framework
● Hiemstra Language Model
● Porter Stemmer, Stopwords
● Post-filter retrieved segments
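The Hiemstra language model mixes a per-document term model with a collection (background) model; a minimal scoring sketch, in which the smoothing weight and the count-dictionary format are illustrative:

```python
import math

def hiemstra_lm_score(query_terms, doc_tf, doc_len, coll_tf, coll_len, lam=0.15):
    """Score a document for a query under a smoothed language model:
    P(t|D) = lam * tf(t,D)/|D| + (1 - lam) * cf(t)/|C|.
    doc_tf / coll_tf map terms to counts; lam weights the document model."""
    score = 0.0
    for t in query_terms:
        p_doc = doc_tf.get(t, 0) / doc_len if doc_len else 0.0
        p_coll = coll_tf.get(t, 0) / coll_len
        p = lam * p_doc + (1 - lam) * p_coll
        if p > 0:
            score += math.log(p)
    return score
```

The smoothing means a segment is not scored to minus infinity just because one query term is missing from its (possibly mis-transcribed) text, which matters with noisy ASR.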
Passage Retrieval
● Documents are automatically divided into shorter
segments
● Segments serve as documents in the traditional IR
setup
● The segmentation is crucial for the quality of
the retrieval
– Especially the segment length
Speech Retrieval Problems
1. Restricted vocabulary
● Data and query segment expansion
● Combination of transcripts
2. Lack of reliability
● Utilizing only the most confident words of the
transcripts
● Using confidence score
3. Lack of content
● Audio music information
● Acoustic similarity
1. Restricted Vocabulary
● The number of unique words in the transcripts is
almost three times smaller than in the subtitles.
● Low frequency words are expected to be the
most informative for information retrieval.
● Expand data and query segments
● Metadata
● Content surrounding the query segment
● Combine different transcripts
Data and Query Segment
Expansion
● Metadata
● Concatenate each data and query segment with
metadata of the corresponding file
● Title, episode title, description, short episode
synopsis, service name and program variant
● Content surrounding the query segment
● Use 200 seconds before and after the query
segment
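The expansion step can be sketched as simple concatenation; the metadata field names and the time-stamped transcript format are assumptions:

```python
def expand_query_segment(segment_text, metadata, transcript_words,
                         seg_start, seg_end, context=200.0):
    """Expand a query segment with file metadata and the transcript
    content up to `context` seconds before and after the segment.

    metadata: dict of metadata fields (title, description, ...).
    transcript_words: list of (start_time, word) tuples.
    """
    before = [w for t, w in transcript_words
              if seg_start - context <= t < seg_start]
    after = [w for t, w in transcript_words
             if seg_end < t <= seg_end + context]
    parts = [segment_text, " ".join(metadata.values()),
             " ".join(before), " ".join(after)]
    return " ".join(p for p in parts if p)
```

The same concatenation with metadata is applied to the indexed data segments.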
Data and Query Segment
Expansion Results
● The improvement is significant in terms of both
measures
● Expansion using metadata and context can
substantially mitigate the restricted vocabulary problem.
● The highest MAP-tol score was achieved on the LIUM
transcripts.
● Even though relatively high WER
● Metadata and context produce much higher relative
improvement to the automatic transcripts than to
subtitles.
● MAP-bin score corresponds with the WER
Transcripts Combination
● The combination is generally helpful.
● Despite the high score already achieved by the
LIUM transcripts alone
● The overall highest MAP-bin score was
achieved using union of LIMSI and NST
transcripts.
● Outperforms results achieved with subtitles
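The union combination of two transcripts can be sketched as follows; this is a deliberately minimal view (the actual system combines transcripts at the segment level before indexing):

```python
def combine_transcripts_union(transcript_a, transcript_b):
    """Union combination of two ASR transcripts of the same segment:
    a word is indexed if it appears in either transcript."""
    return sorted(set(transcript_a.split()) | set(transcript_b.split()))
```

The intuition is that different recognizers make different errors, so the union recovers words that one system missed.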
2. Transcripts Reliability
● WER
● LIMSI: 57.5%
● LIUM: 65.1%
● NST-Sheffield: 58.6%
● Word variants
● Word confidence
Word Variants
● Compare using only the first (most reliable) word
variant with using all the variants in the LIMSI transcripts.
Word Confidence
● Only use words with high confidence scores
● Only words from the LIMSI and LIUM transcripts with a
confidence score higher than a given threshold
● Thresholding increased both scores on the development set
● But it did not outperform the full transcripts on the test set
● We also experimented with voting
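The confidence thresholding can be sketched as a simple filter; the threshold value and data shape are illustrative:

```python
def filter_by_confidence(scored_words, threshold=0.7):
    """Keep only transcript words whose ASR confidence score
    reaches the threshold.

    scored_words: list of (word, confidence) tuples from the transcript.
    """
    return [word for word, conf in scored_words if conf >= threshold]
```

The filtered word sequence then replaces the full transcript text of the segment before indexing.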
3. Lack of Content
● So far, we only use the content of the subtitles/transcripts
● A wide range of acoustic attributes can also
be utilized: applause, music, shouts,
explosions, whispers, background noise, …
● Acoustic fingerprinting
● Acoustic similarity
Acoustic Fingerprinting
1) Minimize noise in each query segment
● Query segments were divided into 10-second-long
passages; a new passage was created every second
2) Submit sub-segments to Doreso API
3) Retrieve song title, artists and album
● Development set: 4 queries out of 30
● Test set: 10 queries out of 30
4) Concatenate title, artist and album name with
query segment text
● Both retrieval scores dropped
Acoustic Similarity
Motivation
● Retrieve identical acoustic
segments
● E.g. signature tunes and jingles
● Detect semantically related
segments
● E.g. segments containing action
scenes and music
Acoustic Similarity
● Calculate similarity between data and query vector
sequences of prosodic features
● Find the most similar sequences near the segment beginning
● Linearly combine the highest acoustic similarity with
text-based similarity score
● MAP-bin: 0.2689 → 0.2687, MAP-tol: 0.2465 → 0.2473
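The linear combination of the two scores can be sketched as follows; the weight `alpha` is illustrative, and both score sets are assumed to be already normalized to a comparable range:

```python
def fuse_scores(text_scores, acoustic_scores, alpha=0.9):
    """Linearly combine text-based and acoustic similarity scores per
    candidate segment and return segment ids ranked by the fused score.
    alpha weights the text-based score; segments without an acoustic
    score contribute 0 for that component."""
    fused = {}
    for seg, ts in text_scores.items():
        fused[seg] = alpha * ts + (1 - alpha) * acoustic_scores.get(seg, 0.0)
    return sorted(fused, key=fused.get, reverse=True)
```

With a high `alpha` the acoustic score acts only as a tie-breaker, which matches the near-identical MAP numbers reported above.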
Visual Features
● Similar setting: Feature Signatures
● Object recognition: CNN descriptors
● Concept detection: CNN descriptors
● Same faces: SIMILE descriptors
Feature Signatures
● Approximate distribution of color and texture in the image
● Work especially well for recognition of similar background
and setting
● Calculate the distance between each keyframe in the query
segment and each keyframe in the data segment.
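The keyframe-to-keyframe comparison can be sketched as follows; the talk uses feature-signature distances, so plain Euclidean distance over fixed-length descriptors and min-aggregation are simplifying assumptions:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def segment_visual_distance(query_keyframes, data_keyframes):
    """Distance between two segments: compare every query keyframe with
    every data keyframe and keep the smallest pairwise distance."""
    return min(euclidean(q, d)
               for q in query_keyframes for d in data_keyframes)
```

The resulting segment-level distance can then be turned into a similarity and fused with the text-based score, as in the acoustic case.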
Object Recognition
● Similarity between keyframes also calculated using a
deep convolutional network (AlexNet)
● Last-layer features used for calculating the similarity
● Improved the results, but performed worse than Feature
Signatures
Concept Detection
● System provided by MUNI:
● Retrieve images similar to the keyframe
● Use descriptions of similar images
– Text analysis and word semantic relationships
● e.g. people, indoors, young, two, canadian, plant, ...
● Only concepts with higher confidence scores and a
restricted number of occurrences were used
● MAP-bin: 0.2333 → 0.2368, MAP-tol: 0.1375 → 0.1638
Face Recognition
● Computed using the Eyedea Recognition Framework
● Faces were first detected and geometrically aligned
with a canonical pose
● SIMILE descriptors were calculated
● A set of face descriptors representing person
identities was available for each face
● Faces were then compared by the L2 distance on the
calculated descriptors.
● MAP-bin: 0.2051 → 0.2088 MAP-tol: 0.1162 → 0.1281
SHAMUS
● Open-source tool for:
● Search: text-based search
● Hyperlinks: retrieval of topically related segments
● Anchoring: determination of the most important segments in videos
● Demo running on 1219 TED talks
http://ufal.mff.cuni.cz/shamus
SHAMUS
● Based on subtitles/transcripts
● Uses Terrier framework
● Aimed at media professionals
● Uses video segments
● 1-minute long, overlapping
● Methods used at MediaEval and TRECVid
Anchoring
● Find the most interesting and important
segments of videos
● Further use in hyperlinking
● Convert metadata to query
● Marked as chapters
Hyperlinking
● Retrieve segments similar to each
anchoring segment on the fly.
● Convert the segment to a textual query.
● The 20 most frequent words (stopwords are
filtered out)
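The query construction above can be sketched as follows; the stopword list is abridged for illustration:

```python
from collections import Counter

# Abridged stopword list; a real system would use a full one.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "are"}

def segment_to_query(text, n=20):
    """Turn a segment's transcript text into a textual query:
    the n most frequent words after stopword filtering."""
    tokens = [w for w in text.lower().split() if w not in STOPWORDS]
    return [w for w, _ in Counter(tokens).most_common(n)]
```

The resulting word list is submitted as an ordinary text query, so hyperlinks can be computed on the fly without any precomputed link structure.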