AURALISATION OF DEEP CONVOLUTIONAL NEURAL NETWORKS:
LISTENING TO LEARNED FEATURES
Keunwoo Choi (Queen Mary University of London, keunwoo.choi@qmul.ac.uk),
Jeonghee Kim (Naver Labs, South Korea, jeonghee.kim@navercorp.com),
George Fazekas (Queen Mary University of London, g.e.fazekas@qmul.ac.uk),
Mark Sandler (Queen Mary University of London, m.sandler@qmul.ac.uk)
ABSTRACT
Deep learning has been actively adopted in the field of music information retrieval, e.g. genre classification, mood detection, and chord recognition. Deep convolutional neural networks (CNNs), one of the most popular deep learning approaches, have also been used for these tasks. However, their processes of learning and prediction are little understood, particularly when they are applied to spectrograms. We introduce auralisation of CNNs to understand their underlying mechanism.
1. INTRODUCTION
In the field of computer vision, deep learning approaches became the de facto standard after convolutional neural networks (CNNs) showed break-through results in the ImageNet competition in 2012 [3]. They rapidly became popular even though the reason for their success was not completely understood.
One effective way to understand and explain CNNs was introduced in [5], where features at deeper levels are visualised by a method called deconvolution. By deconvolving and un-pooling layers, it lets people see which parts of the input image each filter focuses on.
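As a rough illustration of the idea, the sketch below deconvolves a single convolutional layer in plain NumPy: the input is passed forward through cross-correlation and ReLU, and one chosen feature map is then projected back to the input space by convolving with the same filter. This is a minimal sketch of the deconvolution step only; the un-pooling with recorded switches described in [5] is omitted, and all names are illustrative.

```python
import numpy as np
from scipy.signal import correlate2d, convolve2d

def deconv_one_layer(x, filters, idx):
    """Project the response of filter `idx` back to the input space.

    x       : 2-D input array (e.g. a spectrogram patch)
    filters : array of shape (n_filters, h, w)
    """
    # Forward pass: convolutional layers compute cross-correlation,
    # followed here by ReLU.
    fmap = np.maximum(correlate2d(x, filters[idx], mode='same'), 0.0)
    # Backward ("deconvolution") pass: convolve the rectified feature
    # map with the same filter, i.e. correlate with the flipped filter.
    return convolve2d(fmap, filters[idx], mode='same')
```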
However, spectrograms have not been analysed by this approach, and it is not obvious what can be understood by deconvolving them. The information gained from 'seeing' a part of a spectrogram can be extended by auralising the convolutional filters.
In this paper, we introduce the procedure and results of deconvolving spectrograms. Furthermore, we propose auralisation of filters by extending deconvolution to time-domain reconstruction. Section 2 explains the background of CNNs and deconvolution. The proposed auralisation method is introduced in Section 3, and the results are discussed in Section 4.
2. BACKGROUND
The majority of CNN research on audio signals uses a 2D time-frequency representation as the input data, considering it as an image.
Figure 1. Deconvolution of CNNs trained for image classification. The filters' responses and the corresponding parts of the input images are shown on the left and right, respectively, for each layer. Image courtesy of [5].
Various types of representations have been used, including the short-time Fourier transform (STFT), mel-spectrogram, and constant-Q transform (CQT). In [4], for example, an 80-by-15 (mel-band-by-frame) mel-spectrogram is used with 7-by-3 and 3-by-3 convolutions for onset detection, while in [2] a CQT is used with 5-by-25 and 5-by-13 convolutions for chord recognition.
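For concreteness, the following sketch computes the two representations mentioned above with librosa; the file path and parameter values are illustrative, not the exact settings of [2] or [4].

```python
import numpy as np
import librosa

# Load a short excerpt (path and sample rate are placeholders).
y, sr = librosa.load('example.wav', sr=22050)

# Log mel-spectrogram with 80 mel bands; 15-frame excerpts of this
# would form 80-by-15 patches like those used for onset detection.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=512, n_mels=80)
log_mel = librosa.power_to_db(mel)

# Constant-Q transform magnitude, as used for chord recognition.
cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512))
```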
Visualisation of CNNs was introduced in [5], which showed how high-level features (postures/objects) are combined from low-level features (lines/curves), as illustrated in Figure 1. Visualising CNNs helps not only to understand the process inside the black-box model, but also to choose the hyper-parameters of the networks. For example, redundancy or deficiency in the capacity of the networks, which is limited by hyper-parameters such as the number of layers and filters, can be judged by inspecting the learned filters. This information is valuable because tuning hyper-parameters is among the most crucial factors in designing CNNs, and it is a computationally expensive process.
3. AURALISATION OF FILTERS
The spectrograms used in CNNs can also be deconvolved. Unlike visual images, however, a deconvolved spectrogram does not generally lend itself to an intuitive explanation. First, seeing a spectrogram does not necessarily provide intuition as clear as observing an image. Second, detecting the edges of a spectrogram, which is known to happen at the first layer of CNNs, removes components of the spectrogram.
This paper has been supported by EPSRC Grant EP/L019981/1, Fusing Audio and Semantic Technologies for Intelligent Music Production and Consumption.
Figure 2. Filters at the first layer of a CNN trained for genre classification: initial filters (left) and learned filters (right).
To solve this problem, we propose to reconstruct the audio signal using a process called auralisation. This requires an additional stage that inverse-transforms the deconvolved spectrogram. The phase information is provided by the phase of the original STFT, following the generic approach of spectrogram-based sound source separation algorithms. Using the STFT as the input representation is therefore recommended, as it allows a time-domain signal to be obtained easily afterwards.
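A minimal sketch of this reconstruction, assuming librosa and the 512-point STFT with 50% hop size used in Section 4: the deconvolved magnitude is combined with the phase of the original complex STFT and inverse-transformed.

```python
import numpy as np
import librosa

def auralise(deconved_mag, original_stft, hop_length=256):
    """Turn a deconvolved magnitude spectrogram back into audio by
    borrowing the phase of the original complex STFT."""
    phase = np.exp(1j * np.angle(original_stft))
    return librosa.istft(deconved_mag * phase, hop_length=hop_length)

# Usage: S = librosa.stft(y, n_fft=512, hop_length=256) gives the
# original STFT of a clip; D is the (non-negative) deconvolved
# magnitude for a chosen filter.
# y_deconved = auralise(D, S)
```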
4. EXPERIMENTS AND DISCUSSION
We implemented a CNN-based genre classification algorithm using a dataset obtained from Naver Music (http://music.naver.com, a Korean music streaming service). Three genres (ballade, dance, and hip-hop) were classified using 8,000 songs in total. Ten 4-second clips were extracted from each song, generating 80,000 data samples; each clip was converted by a 512-point windowed STFT with 50% hop size. The songs were split 6,600/700/700 into training/validation/test sets. As a performance reference, a 5-layer CNN with 3-by-3 filters and max-pooling of size and stride 2 was used. This system showed 75% accuracy with a loss of 0.62.
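For illustration, a comparable baseline could be written in Keras as below. The layer widths, input shape, and training configuration are assumptions for the sketch; the text above specifies only the 5 layers of 3-by-3 filters, the 2-by-2 max-pooling with stride 2, and the 3 genre classes.

```python
from tensorflow.keras import layers, models

def build_baseline(n_classes=3, input_shape=(257, 173, 1)):
    # input_shape: (freq bins, frames, channels); 257 bins follow from
    # a 512-point STFT, while the frame count is illustrative.
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for n_filters in (32, 32, 64, 64, 128):  # widths are assumptions
        model.add(layers.Conv2D(n_filters, (3, 3), padding='same',
                                activation='relu'))
        model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```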
4.1 Visualisation of First-layer Filters
A 4-layer CNN was built with larger filters at the first layer. Since max-pooling is involved in the deeper layers, filters at those layers can only be illustrated when input data is given. To obtain a more intuitive visualisation, large convolutional filters were used, as shown in [1]. This system showed 76% accuracy.
Figure 2 shows the visualisation of the first layer, which consists of 12-by-12 filters. The filters are initialised with uniformly distributed random values, which resemble white noise, as shown on the left. After training, some of the filters develop patterns that can detect certain shapes. In image classification tasks, edge detectors with various orientations are usually observed, since the outlines of objects are a useful cue for the task. Here, however, several vertical edge detectors are observed, as shown on the right. This may be because the networks learn to extract not only edges but also more complex patterns from spectrograms.
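A grid like Figure 2 can be produced from a trained model as sketched below with matplotlib; the weight layout (filters-first) is an assumption and may need transposing depending on the framework.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_filters(weights, n_cols=8):
    """weights: array of shape (n_filters, 12, 12)."""
    n_rows = int(np.ceil(len(weights) / n_cols))
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(n_cols, n_rows))
    for ax in np.atleast_1d(axes).flat:
        ax.axis('off')                      # hide empty cells too
    for ax, w in zip(np.atleast_1d(axes).flat, weights):
        ax.imshow(w, cmap='gray')           # one filter per cell
    plt.show()
```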
4.2 Auralisation of Filters
Fig. 3 shows the original and deconvolved signals and spectrograms of Bach's English Suite No. 1: Prelude, from the 7th feature at the 1st layer.

Figure 3. Original and deconvolved signals and spectrograms, with ground-truth onsets annotated (on the left).

Listening to the deconvolved signals reveals that this feature works as an onset detector-like filter. This can also be explained by visualising the filter: a vertical edge detector works as a crude onset detector when applied to a spectrogram, since rapid changes along the time axis pass through the edge detector while sustained parts are filtered out.
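This argument can be checked directly: the sketch below applies a hand-made vertical-edge kernel to a magnitude spectrogram and pools the rectified response over frequency, yielding a crude onset envelope. The kernel is illustrative, not a learned filter from the trained network.

```python
import numpy as np
from scipy.signal import correlate2d

# Hand-made vertical-edge kernel: positive response where energy
# increases along the time axis (columns), i.e. at note onsets.
edge = np.tile([[-1.0, 0.0, 1.0]], (3, 1))  # 3 freq bins x 3 frames

def crude_onset_envelope(spec):
    """spec: magnitude spectrogram of shape (freq, time)."""
    response = np.maximum(correlate2d(spec, edge, mode='same'), 0.0)
    return response.sum(axis=0)  # pool over frequency per frame
```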
The behaviour differs at deeper layers, where higher-level information can be expressed in the deconvolved spectrograms. For example, one of the features at the deepest layer was mostly deactivated by hip-hop music. However, not all features can be easily interpreted by listening to the deconvolved signal.
5. CONCLUSIONS
We introduced auralisation of CNNs, an extension of CNN visualisation. It is done by inverse-transforming a deconvolved spectrogram to obtain a time-domain audio signal. Listening to the resulting audio enables researchers to understand the mechanism of CNNs when they are applied to spectrograms. Further research will include a more in-depth interpretation of the learned features.
6. REFERENCES
[1] Luiz Gustavo Hafemann. An analysis of deep neural networks for texture classification. 2014.
[2] Eric J. Humphrey and Juan P. Bello. Rethinking automatic chord recognition with convolutional neural networks. In Machine Learning and Applications (ICMLA), International Conference on. IEEE, 2012.
[3] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[4] Jan Schlüter and Sebastian Böck. Improved musical onset detection with convolutional neural networks. In International Conference on Acoustics, Speech and Signal Processing. IEEE, 2014.
[5] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014.
The results are demonstrated during the LBD session and online on the author's SoundCloud at https://soundcloud.com/kchoi-research.
Example code for the whole deconvolution procedure is publicly available at https://github.com/gnuchoi/CNNauralisation.
