A Mobile Audio Synthesiser
Sean William Rooney
Final Year Project – 4th Year
B.Sc. Single Honours in Computer Science
Department of Computer Science
Maynooth, Co. Kildare
A thesis submitted in partial fulfilment of the requirements for the
B.Sc. Single Honours in Computer Science.
Supervisor: Thomas Lysaght
The objective of this project was to design and develop an Android mobile application which generates and controls various audio synthesis techniques, as per the project specification: delay, FM, AM, and ring modulation. Various filtering techniques associated with synthesisers should be implemented to shape the sound, such as low-pass and band-pass filters, as well as delay and reverb. The interface should be modelled using typical audio synthesis widgets and should scale to different mobile devices. I chose this particular project based on my interest in music production and mobile application development; my familiarity with the Java language also motivated the choice. I primarily used Android Studio to create the app, as opposed to Eclipse, because the former is now the official IDE for Android. Custom widgets, buttons, and rotary knobs produced an appealing UI, through which values were read and used to manipulate the various audio synthesis techniques as required. Some problems encountered during development were overcome; others could not be resolved before the project deadline. In conclusion, I created a mobile audio synthesiser featuring most of the required synthesis techniques, and despite the deadline, I will continue to build upon the app until I am satisfied it is complete.
Keywords: Android, modulation, audio synthesis, mobile application.
The topic addressed in this project is a mobile app which can synthesise sound, as well as modulate and shape that sound to the user's liking. This mobile app will operate on the Android platform and will scale to other Android mobile devices. It will perform the same functions as a physical synthesiser, with the benefit of being much more compact and 'mobile'.
My motivation when taking on this project revolves mainly around my interest in sound production, music production and audio effects. I am familiar with various FM synthesisers such as FM8 and Massive, both made by Native Instruments. Using these synthesisers, I learned to create, shape and model various sounds using ADSR (Attack, Decay, Sustain and Release) envelopes, filter envelopes, LFOs and band-pass filters.
Beyond my existing interest in audio synthesis, I believe that experience in mobile development is becoming ever more relevant. The variety of apps now available for mobile devices is beginning to erode the relevance of desktop machines, and I have personally observed a trend towards mobile platforms, rather than desktops, being used as workstations. If the processing power of mobile devices continues to increase over the next few years at its current rate, desktop platforms could become obsolete. Taking into account a person's busy lifestyle in current times, along with the compact portability of a mobile phone or tablet, the two make for an ideal combination of efficiency and productivity.
Reinforcing my belief in the importance of knowledge of mobile app development is "The U.S. Mobile App Report", researched by Adam Lella and Andrew Lipsman of comScore.com, which states: "The days of desktop dominance are over" [www.comscore.com/Insights/Presentations-and-Whitepapers/2014/The-US-Mobile-App-Report, published 21/08/14, accessed 22/03/16].
I need to synthesise audio; therefore, I first need to be able to play audio through the Android device's speakers. To do this, I will need to create an Android AudioTrack and send an array of type 'short' to it. For the AudioTrack I will need to specify a sample rate, and for the array, I will need to find the minimum buffer size of the AudioTrack, which will be the size of the array. To hear any sound when the AudioTrack is playing, I will need to fill the array with a sine or cosine wave at a certain phase, or some variation of these waves depending on the desired sound, cast to a short.
The effects and wave types selected by the user will affect how the array is filled; therefore I will need to create a function which deals with all of these possible parameters. Keeping in mind that the project specifies that the app must scale to other mobile devices, I may not be able to fit the entire app onto a single screen; therefore I will need to create one or more intents and activities which deal with multiple screens. As well as including the effects and modulations as per the project specification, I need to provide controls to the user, so that they may manipulate these effects and modulations as they wish and change the amount of an effect applied. Depending on the capabilities of the Android device and the limitations of the Java Development Kit, I may need to deal with aliasing and audio stutter, particularly when trying to implement the delay effect.
I approached the initial problem of synthesising sound by first importing the Android media packages AudioTrack, AudioFormat and AudioManager. With these packages imported into the project, I was able to create an instance of an AudioTrack, which I named trackOne and declared within a thread named "t". There are four AudioTrack methods which I used with trackOne: play(), write(), stop() and release(). Immediately after declaring and initialising trackOne, I called trackOne.play(). This begins playing whatever has been written to trackOne, though at this point in my thread, nothing has yet been written to it.
To hear a sound while trackOne is playing, I would need to write something to the track. The track's write function requires an array of short values, which I would call samples, and a specified buffer size. How the samples array is filled would depend on the sound I wish to produce, which is determined by the buttons and knobs on the screen. Since multiple sounds are possible, I would create multiple functions to fill the samples array in different ways depending on the desired sound. For example, a function to fill the array with a sine wave would consist of a 'for' loop running from an integer i = 0 up to i < the buffer size, which is the size of the array. Within this loop, I would set the sample array at position i to "amplitude * cos(ph)", where ph represents the phase of the wave and at this point is 0. Amplitude is an integer, initially set to 10000, whose value can be modified by a rotary knob on the screen. After this line is executed, the value of ph is incremented as follows:
ph = ph + 2 * pi * frequency / sample rate.
This function will run within a while loop, and the array of samples will continue to be written to trackOne until the loop is broken. This would be achieved by a boolean which I would call isRunning. This boolean is set to true while a button is held down on screen, so during this time, the samples array is filled as per a specified function and written to trackOne. The boolean is set to false once the button is released, which breaks the while loop.
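The fill loop described above can be sketched in plain Java; the AudioTrack write call is shown only as a comment, and the amplitude value of 10000 comes from the text:

```java
public class SineFiller {
    static final int AMPLITUDE = 10000; // default from the text, adjustable by a knob

    // Fills 'samples' with a cosine wave and returns the updated phase,
    // so the wave stays continuous across successive buffers.
    public static double fillSine(short[] samples, double ph,
                                  double frequency, int sampleRate) {
        for (int i = 0; i < samples.length; i++) {
            samples[i] = (short) (AMPLITUDE * Math.cos(ph));
            ph += 2 * Math.PI * frequency / sampleRate;
        }
        return ph;
    }

    // Usage while the play button is held (isRunning true):
    // while (isRunning) {
    //     ph = fillSine(samples, ph, frequency, sampleRate);
    //     trackOne.write(samples, 0, samples.length);
    // }
}
```

Returning the phase rather than resetting it each buffer avoids audible clicks at buffer boundaries.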
Once the loop is exited, the next two functions to run are stop() and release(): trackOne.stop() will cease any sound from playing, and trackOne.release() will free the track's resources.
In order to determine which function should run to fill the samples array in the desired manner, corresponding boolean values would be required. If a particular switch is turned on, its corresponding boolean value would be set to true, and an if statement in the aforementioned while loop would determine which method is run to fill the samples array. For example, if I were to turn a switch labelled "Ring Mod" to the on position, a boolean value called ringMod would be changed from false to true. Then, when the sound is played and the while loop is entered, an 'if' statement checks whether ringMod is true and, if so, runs the function that fills the array with ring-modulated samples.
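This switch-to-boolean dispatch could be sketched as follows; the fill routines are stubbed with labels here, standing in for the real methods, and the flag names follow the text:

```java
public class WaveDispatch {
    // Flags toggled by the on-screen switches; sine is the default wave.
    static boolean ringMod = false;
    static boolean sine = true;

    // Returns a label for which fill routine would run; in the real app the
    // corresponding method fills the samples array instead of returning a name.
    public static String selectFill() {
        if (ringMod) {
            return "ringMod"; // modulation flags take priority over plain waves
        } else if (sine) {
            return "sine";
        }
        return "silence";
    }
}
```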
My approach for the UI, which allows the user to select and control a sound or effect, revolves around various widgets in the form of image buttons, as well as a custom widget called a rotary knob. The aforementioned boolean values would be controlled by various switches placed about the UI in an ergonomic fashion. These switches would be made from Android image buttons, buttons which carry out a function upon being clicked. Because each is an image button, its appearance is determined by a .png image set as its background; this allows you to make the button look however you like. As well as executing a function upon being clicked, each switch would run a command which changes the background image from the 'off' image to the 'on' image, and vice versa when clicked again.
In order to control the amount of an effect being applied, I would need to implement a custom widget called a rotary knob. This would be a widget that rotates when dragged left, right, up or down, similar in functionality to a potentiometer. I would limit the knob to rotate between 0 and 290 degrees and map values between 0 and 100 to be output relative to the angle of rotation. I would use a variation of this knob to select which waveform is being used: instead of mapping values 0 to 100 onto the output angle, I would set the knob to output string values depending on which range of angles the knob is pointing at. For example, if the current angle is between 0 and 50, the string "Sine" would be output, and I would map Square, Saw and Triangle to other angle ranges.
Two 'if' statements would ensure that the rotary knob never goes beyond the range of 0 to 290 degrees. If the angle would ever go below 0 or above 290, a line of code would execute to set the angle back to 0 or 290 accordingly.
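The clamping and mapping logic can be sketched in pure Java; the 0–290 degree range and 0–100 output range come from the text, while the angle ranges for Square, Saw and Triangle are illustrative assumptions:

```java
public class RotaryKnob {
    static final float MAX_ANGLE = 290f;

    // Clamp the rotation angle to the knob's legal range.
    public static float clampAngle(float angle) {
        if (angle < 0f) return 0f;
        if (angle > MAX_ANGLE) return MAX_ANGLE;
        return angle;
    }

    // Map the clamped angle linearly onto a 0-100 output value.
    public static int angleToValue(float angle) {
        return Math.round(clampAngle(angle) / MAX_ANGLE * 100f);
    }

    // Variant for the waveform selector: map angle ranges to wave names.
    public static String angleToWaveform(float angle) {
        float a = clampAngle(angle);
        if (a <= 50f) return "Sine";   // range from the text
        if (a <= 130f) return "Square"; // assumed range
        if (a <= 210f) return "Saw";    // assumed range
        return "Triangle";              // assumed range
    }
}
```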
In order to run the code on a device and test its functionality, I could use the Android Virtual Device built into Android Studio, which makes code testing on a device very convenient. The virtual device can emulate any Android device with any screen size, running any version of the Android OS from Cupcake all the way up to Marshmallow and beyond. However, in order to take advantage of the "Intel x86 Emulator Accelerator", a computer with an Intel processor is required. Unfortunately, the computer I am using to create the application runs an AMD processing unit, so I am unable to take advantage of the accelerator. As a result, using the Android Virtual Device is an unbearably slow process and therefore unfeasible, considering how often I would need to check the stability of my code. With this in mind, along with my intention to learn more about mobile development in the future, I decided to purchase an Android phone, an LG G3 running Android Lollipop. The device would be perfect for testing the functionality of the code, and I could still use the Virtual Device at a later stage of development to check how the application scales to other devices.
To allow USB debugging on the device, I would need to go into the settings menu and enable developer options (for this option to become available, one must first enter the software info menu and tap the build number seven times). After this, I would enter the new "Developer Options" menu and turn on USB debugging. My Android device is now recognised within Android Studio, and I can select it as the device on which to run the code during development.
Once the testing device is set up to be recognised by the computer, I would begin testing the code. Upon running the code, I am greeted with the UI on the device, which I would first compare to the preview layout within Android Studio. This stage of evaluation ensures the layout of the app is both ergonomic and user friendly. When implementing value mapping in one of the app's various functions, I would include a text box in the UI which would display the mapped values. I would observe this text box while testing the app to ensure the values are mapping correctly, and then remove the text box later or reuse it for other value mappings.
Next comes evaluation of the sound. At this stage I would listen to the sound playback with each of the various modulations and wave types selected, and then compare these to the corresponding functions of a well-made, reputable audio synthesiser found on the Google Play Store. The app I mainly used for sound comparison was "Common FM Synthesiser".
Before this project, I had never developed an application for a mobile device, nor studied sound synthesis or any associated modulation techniques. Now that I am nearing the project deadline, I can create and run a stable, functional, user-friendly mobile app that will scale to other mobile devices. I now have the tools and skills necessary to continue developing for the mobile platform, as well as the tools required to expand development beyond it, to the wearable technology platform (smartwatches, Google Glass, etc.).
I now have a much deeper knowledge of the mathematics and programming behind sound synthesis. I now know how to create functions that generate various sound waves, and how to manipulate a sine wave in order to achieve various waveforms, for example, approximating a square wave by summing the odd harmonics of its Fourier series. Prior to taking on this project, I was familiar with using a sound synthesiser and its associated effects, but I now know how to create these effects myself, such as the ring modulator, frequency modulator or band-pass filter.
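As one example of the effects mentioned, ring modulation is simply a sample-by-sample product of a carrier wave and a modulator wave; a minimal sketch (the frequencies and amplitude in the usage are illustrative):

```java
public class RingMod {
    // Ring modulation: multiply the carrier wave by the modulator wave,
    // sample by sample, then scale and cast to short for the AudioTrack.
    public static short[] ringModulate(int n, double carrierHz, double modHz,
                                       int sampleRate, int amplitude) {
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            double t = (double) i / sampleRate;
            double carrier = Math.cos(2 * Math.PI * carrierHz * t);
            double modulator = Math.cos(2 * Math.PI * modHz * t);
            out[i] = (short) (amplitude * carrier * modulator);
        }
        return out;
    }
}
```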
Taking on this project also gave me the opportunity to learn about creating a stable and user-friendly UI on a mobile device. Android Studio defines its widgets and layouts using XML, with which I was somewhat familiar prior to the project, having created style sheets for an HTML webpage in the past. I was able to create a custom widget from scratch which worked perfectly, and to manipulate it to behave in different ways as I needed it to. I also improved my Adobe Photoshop skills, as I used it extensively through the course of creating my UI: I used it to create backgrounds and to create second instances of buttons and switches, so as to appropriately change the appearance of a switch when flicked or a button when pressed.
Finally, I learned a great deal about the Android NDK (Native Development Kit) and the JNI (Java Native Interface). Using these tools I was able to write a C++ file and its associated header file, and then call it from within a Java class file. I used this method when trying to implement a delay function in the application, due to issues that occurred when trying to program the same function in Java. As a result, I was able to make use of my knowledge of the C++ language as well as Java and XML. Programming a delay effect in C++ would avoid the risk of audio stutter and aliasing while streaming the audio.
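The Java side of such a JNI bridge can be sketched as follows; the library name, class name and method signature are illustrative assumptions, and the matching C++ implementation would follow the standard JNI naming convention:

```java
public class DelayEffect {
    static {
        // Loads libdelay.so built from the C++ source via the NDK (name assumed).
        System.loadLibrary("delay");
    }

    // Declared native in Java, implemented in C++ as
    // Java_<package>_DelayEffect_applyDelay(JNIEnv*, jclass, jshortArray, jint, jfloat).
    public static native void applyDelay(short[] samples, int delaySamples,
                                         float feedback);
}
```

The Java code calls applyDelay on each filled buffer; the heavy per-sample work then runs in native code rather than in the JVM.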
["The Audio Programming Blog", Author: Victor Lazzarini, https://audioprograming.wordpress.com/catagory/android/, accessed 24/03/2016]
This blog by Victor Lazzarini provided me with a lot of insight into the Android NDK and how it can be used for audio synthesis, as well as examples of how a synthesiser can be programmed using Java and the Android SDK. In some of his posts, Lazzarini explores the problem of latency in audio development on Android. He details how he overcame the latency issue by taking advantage of the NDK and wrapping C++ code in Java, then published his results and findings on the blog. Over the course of the project, I attempted to implement his solutions in my application in an effort to construct a working delay function.
Android Studio: the IDE itself contains a help section and prompts you with a tip of the day each time you open it. These small pieces of information helped me to familiarise myself with navigating Android Studio, to customise its look and layout, and to understand how Android Studio differs from Microsoft Visual Studio.
["Android Studio Development Essentials – Second Edition", Author: Neil Smyth]
As this was my first endeavour into the world of development on the mobile platform, this book served as a very helpful starting point. The goal of the book is to teach the skills necessary to develop Android applications using the Android Studio IDE and the Android SDK. Combining this with my prior knowledge and experience of Java programming, I found it very easy to begin creating and testing my first applications.
["The Android Developers Cookbook – Building Applications with the Android SDK", Authors: James Steele and Nelson To]
This 355-page book discusses the main concepts of the Android OS and of creating applications for it using the Android SDK, and goes into great detail on these topics. I downloaded the book as a PDF and referenced it many times throughout the course of creating the application. It makes a great companion to the aforementioned "Android Studio Development Essentials", covering certain areas in more depth where needed. From this book I also learned a great deal about the Android OS as it is and how it came to be, i.e. its evolution, its structure, how a device runs the OS, and so on.
Maynooth University Moodle Resources: Upon choosing this project, my supervisor Thomas Lysaght granted me access to the "CS322 – Music Programming" and "CS385 – Mobile Application Development" modules on Moodle. The notes provided there by both Thomas Lysaght and Joseph Timoney gave me a much better understanding of how music and audio are programmed, and a great foundation from which to begin my approach. The lecture note PDFs, along with the Java code samples, helped me to build an initial application which I could then expand upon.
The user should be able to create and play a sound with ease; a minimal number of steps between starting the app and hearing the sound is preferred. A simple activity diagram can be seen in [appendix A] in the attachments folder. There you can see the user has a choice between creating and playing a sound on the first tab of the application, or clicking through to the next tab to build further upon the sound and then play it.
Generating the sound is a problem that would require a main run function in the code. This function could take in various boolean and float or double values which would determine how the note sounds. The user requires a button to play the note, a means of controlling the frequency and amplitude, and switches to toggle various effects such as ring modulation and frequency modulation. After setting up the desired sound, applying effects and specifying a wave shape, the user would need a screen where they could easily sample the sound at different frequencies/pitches. This screen should allow at least a full octave of notes so the user may hear an entire range of the sound.
The four basic wave shapes that can be generated by a synthesiser will need to be recreated in this application. Studying the resources available to me in the CS322 – Music Programming module on Moodle, I was able to come up with pseudo code for the sine, saw, triangle and square waveforms. I recorded these for later use; they can be seen in [appendix B] in the attachments folder. These are the equations I would need to fill an array with and load into the AudioTrack to play through the speakers.
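A Java sketch of the four generators follows; each returns one sample in the range -1 to 1 for a given phase. The saw and triangle here use direct geometric forms rather than band-limited series, which is an assumption about the pseudo code in appendix B:

```java
public class Waveforms {
    // One sample of each basic wave shape at phase ph (radians), range -1..1.
    public static double sine(double ph) {
        return Math.sin(ph);
    }

    // Square: sign of the sine wave.
    public static double square(double ph) {
        return Math.sin(ph) >= 0 ? 1.0 : -1.0;
    }

    // Saw: ramps linearly from -1 to 1 once per cycle.
    public static double saw(double ph) {
        double frac = (ph / (2 * Math.PI)) % 1.0;
        return 2.0 * frac - 1.0;
    }

    // Triangle: rises then falls linearly within each cycle.
    public static double triangle(double ph) {
        double frac = (ph / (2 * Math.PI)) % 1.0;
        return frac < 0.5 ? 4.0 * frac - 1.0 : 3.0 - 4.0 * frac;
    }
}
```

Multiplying any of these by the amplitude and casting to short yields a sample ready for the AudioTrack buffer.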
The size of the device's screen could be seen as something of a limitation when it comes to fitting the required toggles, buttons and switches onto it. Multiple Activities/classes would be required for both functional and aesthetic purposes. The problem of maintaining a variable's value when moving from one Activity to another will need to be handled so as not to lose the sound the user has created when clicking through to the next screen. For example, if I start up the application and choose the wave type on the first page to be a saw wave, I need the boolean value saw to remain true when I start the next Activity as I move to the next page.
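Carrying the wave-type flag between Activities can be sketched with Intent extras from the Android SDK; the Activity class name and extra keys here are illustrative:

```java
// In the first Activity, when moving to the next screen:
Intent intent = new Intent(this, SecondActivity.class); // SecondActivity is illustrative
intent.putExtra("saw", saw);             // boolean wave-type flag
intent.putExtra("frequency", frequency); // current frequency as a double
startActivity(intent);

// In the second Activity's onCreate, recover the values:
boolean saw = getIntent().getBooleanExtra("saw", false);
double frequency = getIntent().getDoubleExtra("frequency", 440.0);
```

The second arguments to the getters are fallback defaults used when no extra was supplied.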
The problem of latency in audio development on Android will need to be addressed in order to maintain a quality performance throughout the app. Research into this has led me to believe the best course of action is to use the Android NDK and JNI when programming some of the more processor-demanding sound functions. For this I would need to download the NDK. Although it is still an experimental feature, this method of programming for Android should result in lower audio latency.
Aliasing is another issue with audio development that could present itself over the course of this project. To address it, I may need to experiment with the sampling rate and the buffer size I use with the AudioTrack. The sound may crack and stutter if the sample rate is too high, and thus it must be examined carefully as I test the audio output.
The Solution (Design and Implementation)
Various mathematical equations are used to produce the different wave types and modulation effects. The pseudo code of the equations for the four wave types can be seen in [appendix B] in the attachments folder, and the pseudo code for frequency, amplitude and ring modulation can be found in [appendix D] in the same folder. A 'for' loop which iterates from 0 to the previously declared buffer size will be used alongside these equations to fill the array for each buffer of audio.
The solution to a streamlined, user-friendly experience when using the application is a well-laid-out, ergonomic design. Initial sketches for the layout of the application's three Activities can be seen in [appendix C] in the attachments folder. These sketches show how the streamlined three-step process is achieved. You are greeted with the main Activity, where you choose your waveform, set the frequency and amplitude, choose whether to apply any modulation types, and have a button to sample the sound. The user can then click the tab arrow to be brought to the next Activity, where they can specify the ADSR values and click a toggle button if they wish to apply an additional effect, with the sample button at the bottom.
The user can then use the toggle arrows either to go back to the previous Activity or on to the third and final Activity. This Activity is designed so that the user may extensively sample and test the sound through a full octave of frequencies. In the centre of the Activity, the user can see which note they are playing, according to the frequency of the note.
The layout of each page will be designed to be familiar as well as ergonomic. The sample button will be thoughtfully placed at the bottom centre of the page so that when holding the mobile device, the user's thumb can easily reach it and it feels natural. The buttons to switch between Activities will be placed at the top of the screen, in a fashion similar to tabs in an internet browser or the forward and back buttons in a Windows application.
Which equation fills the array will be determined by boolean values. These booleans will be global and named according to the wave type or modulation function they represent. All of these values will be initialised, and all but the sine boolean will start out false. Multiple functions will be written in the first Activity's class; they will also be named according to the wave types and modulations, and will contain the corresponding equations. The input parameters to these functions will be the samples array and the global frequency of the application. The playNote function will be called upon a press of the button, and the samples array will be passed to one of the aforementioned functions to be filled by the relevant equation. Which function the array is passed to will be determined by if statements and the boolean values: two waveform booleans will never be true at the same time, and the modulation functions will override the waveform functions.
To solve the problem of audio latency on Android, I will attempt to incorporate the NDK into my project when designing the delay effect. I will create a C file containing the functions that produce the delay, then call the native Java function in my main Activity and apply the result to the current AudioTrack and samples array.
In order to create the custom widgets used throughout the application, I will use Adobe Photoshop extensively throughout the course of development, creating .png images with transparent backgrounds to replicate the look of the sketches in [appendix C]. If I take an image of a round red button and darken the red area, the button in the image will look as if it has been pressed. Upon starting an Activity, the first red button can be seen. While the button is being touched on screen, I will change its background image to the darker, edited version, and back to the original when the button is released. It is impossible for an Android application to replicate the responsive, tactile feel of an actual synthesiser. However, I believe this extra level of detail, despite its subtlety, adds a level of immersion that will make the user feel less detached from the creative process of sound generation. Perhaps in the future I will add more detail, such as a clicking sound when switching on one of the modulation switches.
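Swapping the button's background between the two Photoshop-made images can be sketched with an Android touch listener; the drawable names and the playButton variable are illustrative, and isRunning is the boolean from the audio loop:

```java
// Swap the .png background while the button is held; restore it on release.
playButton.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            v.setBackgroundResource(R.drawable.red_button_pressed); // darkened image
            isRunning = true;  // start filling and writing samples
        } else if (event.getAction() == MotionEvent.ACTION_UP) {
            v.setBackgroundResource(R.drawable.red_button);         // original image
            isRunning = false; // break the audio while loop
        }
        return true;
    }
});
```

Using a touch listener rather than a click listener is what makes the hold-to-play behaviour possible, since clicks only fire on release.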
The result of the equations detailed in [appendix B] and [appendix D] will be cast to a short and will therefore be eligible to be saved in the array of type short called samples. The iterations of the 'for' loop in which the array is filled are constrained by the buffer size, and thus cannot overrun the bounds of the array.
I have created a UML diagram covering the general layout and navigation of the application, taking into account each navigation option from each Activity. Every course of action is covered; this diagram can be found in [appendix F] in the attachments folder. During testing, I shaped a sound by assigning values to variables before navigating to the next Activity, then checked whether the variables were still stored. The stored values were maintained as the application was navigated extensively, back and forth through the Activities, with no loss of data.
From the screenshots provided in [appendix E] in the attachments folder, it can be seen that the final layout and design of the application replicate the initial sketches fairly accurately. Both the Java code and the XML layout code I wrote appear to be working properly, in accordance with my proposed solutions.
Testing was carried out by loading a debug version of the application onto my testing device, the 5.5-inch LG G3 phone running Android Lollipop, and assessing the stability of the application's functions from there. I can easily and quickly make small changes to the code, reload the debug build onto the device, and check the changes.
Testing the accuracy of the sound generated from the waveforms involved first running the debug version of the application on the test device, selecting the waveform I wanted to test, and then comparing the sound to the same waveform generated by a popular, renowned synthesiser application. The application I used as a control for my sound accuracy testing was "Massive" by Native Instruments ("N.I."). The screenshot in [appendix G] in the attachments folder shows the vast number of options within "Massive", which allowed me to run many variations of my test. Using Oscillator 1, I picked a waveform and played it, then compared it to the same waveform on my testing device.
For testing the overall stability of the application, I put the app under various "stress tests". These involved trying to crash the app, checking the error log whenever I succeeded, and making the necessary changes to the code to prevent that crash in the future. For example, quickly mashing the playNote button caused the sound to alias and output a very unpleasant tone, and quickly rotating the device and rotating it back crashed the app, though only if device rotation was enabled.
After attempting to implement a native Java function via the NDK to generate an audio delay effect, my tests were unsuccessful, and the presence of this code within the project began to affect its stability; as a result, it was removed.
Testing revealed that values assigned to variables were not being maintained across the Activities as I switched between them. Upon closer investigation, I found I was missing a line of code: where I loaded the variables I wished to carry over into a bundle, I never put the bundle into the intent.
It was also revealed that the tremolo function was increasing the frequency by the desired pitch on each loop but not decreasing it back to the original value. This resulted in the frequency of the note increasing indefinitely as I held down the play button.
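The fix is to modulate a copy of the base frequency rather than the stored value itself, so nothing accumulates between loops; a sketch (the method name and depth parameter are illustrative, following the text's frequency-based tremolo):

```java
public class Tremolo {
    // Returns the momentary frequency for the current buffer WITHOUT mutating
    // baseFreq. The original bug added the offset to the stored frequency on
    // every loop iteration, so the pitch climbed indefinitely.
    public static double modulatedFreq(double baseFreq, double depthHz,
                                       double lfoPhase) {
        return baseFreq + depthHz * Math.sin(lfoPhase);
    }
}
```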
Though the application seemed mostly stable, there was still work to be done on some of its functions.
After extensive testing, the application resulting from my solution was fully functional, with most of the required sound synthesis functions. The comparison of sounds between my application and the control application "Massive" was surprising: the sounds were similar to an acceptable degree without any alteration. It can therefore be determined that the equations for the waveform functions and modulations presented in my solution succeeded in creating their corresponding sounds to an acceptable accuracy.
After making the necessary changes to how variables were handled when switching Activities, the application successfully retained all its variable information regardless of how many times the Activities were switched.
Despite my difficulties implementing the native Java function for the delay effect with the NDK, I was still able to set up and implement a simple JNI function. I created a text widget on the third Activity page, whose text is set within the Activity's onCreate function. The text originates from a C file containing a function called stringFromJNI; upon running the app, it can be clearly seen in the third Activity that the line of text written in the C file has been written to the text widget at the bottom of the screen.
The results have been incredibly informative and have allowed me to correct errors where they existed and to improve aspects of the app in areas that were lacking. Various possible crashes and losses of data when navigating the application have been eliminated, resulting in a much more stable app.
The resulting application is, for me, a stepping stone towards further work in the area of mobile application development. Taking on this project has increased my already piqued interest in development for this platform.
My results from this project have confirmed that there is indeed a latency issue with audio programming on the Android platform. A combination of aliasing, a delayed response from the play button, and the app's behaviour when trying to generate a note quickly after a sound has ended are all evidence of the presence of this issue.
While viewing C code connected to a native Java function within Android Studio, you are constantly prompted with a notice that the Android NDK is still an experimental feature. Despite some difficulties, I was able to utilise the NDK and call a native Java function; as a result, I can confirm that this experimental feature is functional. The NDK can be very useful and broadens the spectrum of tools available within Android Studio, and though my current example of NDK utilisation may be quite basic, I will be studying and experimenting with the NDK in much more depth in the future.
In the future I will continue to build upon this application until I am satisfied with its stability and functionality. I would like to add more audio synthesis functions to give the user as much freedom and as many options as possible.
I will continue to work with Android Studio and attempt to develop useful apps, learn more about the IDE, and see how far I can push a mobile device in terms of what a mobile application can achieve compared with a desktop machine. I have learned much about the Android OS, the Java language and the Android SDK throughout the course of this project, and am eager to learn more, and to create more.
The U.S. Mobile App Report
[www.comscore.com/Insights/Presentations-and-Whitepapers/2014/The-US-Mobile-App-Report, Authors: Adam Lella and Andrew Lipsman, published 21/08/14, accessed 22/03/16]
The Audio Programming Blog
[https://audioprograming.wordpress.com/catagory/android/, Author: Victor Lazzarini, accessed 24/03/2016]
The Android Studio IDE
[http://developer.android.com/sdk/index.html, accessed 30/03/2016]
Android Studio Development Essentials
["Android Studio Development Essentials – Second Edition", Author: Neil Smyth]
The Android Developers Cookbook
["The Android Developers Cookbook – Building Applications with the Android SDK", Authors: James Steele and Nelson To]
Appendices are provided in the attachments folder of the supporting documents.
If I were to repeat my attempt at this project, I would put more focus on the NDK and on wrapping C code in a native Java function, due to the fact that audio is handled more efficiently in C.