Paper given at ‘Diagnoses of Matter and Memory: Bergson and the Problems of Brain, Time and Memory’ 8th International Colloquium of Project Bergson in Japan, taking place in Tokyo and Osaka, Japan, 10th-13th November 2016.
3. Preface
✤ Who am I? - ‘a sociologist of technology and
a philosopher of science’ - a ‘new materialist’?
✤ Theatre -> Arts Management -> Cultural Theory -> Sociology -> Information Systems -> Business School -> communications? philosophy?
✤ Book: Bergson and Systems Theory :
✤ Philosophy need neither be the ‘handmaiden of science’ nor
divorced from instrumental reality
✤ Strong emergence through self-organisation but with
‘teleological’ trends: active consciousness both end-point and
inherent in both matter and evolution
4. Introduction
A little background
✤ Artificial Intelligence - 1950s - General A.I. - C3PO / Terminator
✤ Narrow A.I. - 1960s - real practical baby steps…
✤ Machine Learning - 1970s onwards, including ‘neural nets’
✤ Deep Learning - 2010s
✤ 1940s/50s computer engineering (mis)use of language
✤ Algorithm: a flow diagram that represents the bare essentials of what
an engineer can understand and reproduce of a human activity
✤ not the human activity itself
5. Deep Learning
✤ Artificial Neural Networks
✤ 1970s machine learning
✤ Not like biological neurons - c.10¹¹ flexible interconnections
✤ Algorithms failing multiple times in
order to isolate ‘correct’ answer
✤ 2000s: GPUs & social media
✤ Andrew Ng - Google
✤ Layering of artificial neural networks - ‘deep’
✤ Photo-‘recognition’ - cats in YouTube videos / self-driving cars?
http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html
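The training process sketched on this slide - an algorithm failing repeatedly until its stored weights yield the 'correct' answer - can be illustrated with a single artificial neuron. This is a minimal hypothetical sketch (a classic perceptron learning logical AND), not the layered deep-learning systems Ng's team built:

```python
# Minimal sketch: one artificial 'neuron' learning logical AND by
# repeated trial and error. The weights are nudged after every wrong
# answer -- the algorithm 'fails multiple times' until its stored
# weights reliably reproduce the training data.

def step(x):
    return 1 if x > 0 else 0

# training data: inputs -> desired output (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection 'strengths'
b = 0.0          # bias
rate = 0.1       # learning rate

for epoch in range(100):                 # many repeated passes
    errors = 0
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        if err != 0:                     # wrong? nudge the weights
            errors += 1
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    if errors == 0:                      # a 'lesson learnt': pure habit
        break

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# -> [0, 0, 0, 1]
```

'Deep' learning stacks many layers of such units, but the principle is the same: repeated failure gradually shapes stored numbers, nothing more.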
7. Human Learning
Bergson, Memory and Learning
✤ Two kinds of memory, two kinds of learning
✤ Human memory is so much more than storage: it is a conscious
experience of time
✤ Algorithms can only match what is in front of them with what is
stored within them
✤ The logfile of an algorithm exists continuously, stored as tokens in
the ‘now’ of the computer storage
✤ Memory recalls each repetition of learning a lesson as an
individual moment in personal history
✤ Algorithmic ‘memory’ is Bergsonian ‘habit’ - an action that is a
lesson learnt
✤ Humans are aware in time: computers are not
✤ “I study a lesson… and in order to learn it by heart I read it a first time, accentuating every line … it is said that I know my lesson by heart, that it is imprinted on my memory” : “Each several reading then recurs to me with its own individuality… each reading stands out before my mind as a definite event in my history” (Bergson 1908:89).
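The contrast this slide draws can be made concrete in a few lines of code. This is a hypothetical illustration (the names are invented here, not Bergson's or any real system's): the dictionary stands in for an algorithm's stored tokens, and repeated 'readings' of a lesson leave behind only a single habit.

```python
# Sketch of algorithmic 'memory' as Bergsonian habit: each repetition
# of the lesson overwrites the stored token, so no reading survives as
# 'a definite event in my history'.

habit = {}  # what is 'stored within' the algorithm

for reading in ("first", "second", "third"):
    habit["lesson"] = "recited"   # every repetition leaves the same token

# recall = matching what is in front of it with what is stored within it
print(habit.get("lesson"))   # -> recited
print(len(habit))            # -> 1: three readings, one indistinguishable token
```

Bergsonian memory-as-duration would require each of the three readings to persist as its own moment; the machine retains only the final, undated result.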
8. Human Learning
Libet, Bergson, and the brain
✤ Benjamin Libet's Mind Time - experiments directly stimulating areas of the brain during neurosurgery
✤ Awareness takes place 0.5secs after stimulation/physical sensation
✤ Micro-differences in time taken from feet and hands to brain also erased
✤ Awareness is a composite of different moments all somewhere around
0.5secs after the event : “awareness itself is a mental phenomenon
separate from the content of a mental event” (Libet 2005:56).
✤ Often interpreted to support supervenience of physics over conscious will
✤ On the contrary - Libet is clear there is a “conscious veto” - Bergson is
clear that the ‘now’ we experience is all too often in the past and that free
will is precisely that veto - the ability to intervene in automaticity and
choose
✤ Plus - deliberative consciousness occurs over much longer time periods
than the perception-action flow of physical stimuli measured by Libet
✤ “Practically we perceive only the past, the pure present being the invisible progress of the past gnawing into the future. Consciousness, then, illumines, at each moment of time, [an] immediate part of the past” (Bergson 1920:194).
✤ “Memory… imports the past into the present, contracts into a single intuition many moments of duration” (Bergson 1920:80).
https://thehumanevolutionblog.com/2015/01/07/is-violence-what-made-humans-smarter-than-other-animals/
9. Artificial Intelligence
Turing and intelligent machines
✤ Alan Turing (1950) “Computing Machinery and Intelligence”
Mind LIX (236): 433–460 : ‘Can machines think?’
✤ Imitation game: A(male) B(female) / C must determine which
is which; A must try to deceive C.
✤ Dismisses ‘mind’ and ‘think’ as undefinable, replaces
question with ‘Are there imaginable digital computers which
would do well [as A] in the imitation game?’
✤ Sleight of hand creates ‘human computers’ that digital
computers may mimic
✤ But - as he says - digital computers are ‘discrete state
machines’ defined by programmable Laplacian rules: initial
states determine outcomes
• Hayes and Ford (1995) - Turing
Test is harmful
• Dennett (1985) and Floridi et al. (2008) - it is simply too hard.
• Block (1981) - intensely
programmed ‘Blockhead’
capable of a myriad special
responses but still neither
intelligent nor possessing ‘mind’
• Searle (1981) questions whole
notion that machines could
think
http://www.rutherfordjournal.org/article040101.html
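Turing's 'discrete state machine' can be sketched directly. The transition table below is an invented example (not Turing's own from the 1950 paper): given the same initial state and the same inputs, the outcome is fully determined - the Laplacian point at issue.

```python
# Sketch of a discrete state machine: a fixed transition table means
# initial state + inputs fully determine the outcome. There is no
# room for novelty, only for 'surprise' on the observer's part.

transitions = {
    ("q1", 0): "q1",  ("q1", 1): "q2",
    ("q2", 0): "q3",  ("q2", 1): "q1",
    ("q3", 0): "q2",  ("q3", 1): "q3",
}

def run(start, inputs):
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# same start, same inputs -> always the same end state
print(run("q1", [1, 0, 1]))   # -> q3
```

Running the machine twice on identical input can never diverge; this determinism is exactly what the complexity-theoretic objections on the next slide target.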
10. Artificial Intelligence
Complexity, novelty and consciousness
✤ Contemporary complexity theory discounts the Laplacian model: 50/50 bifurcations/self-organisation/the quantum model mean probabilities are all there are; strong emergence: no 'discrete state machines' except computers (Prigogine and Stengers 1985)
✤ Origination, creativity, innovation - Turing describes as
mere ‘surprise’ : the Laplacian view
✤ For Bergson novelty is key : humans as ‘centres of
action’ able to make choices - the ‘conscious veto’ in
duration
✤ Turing’s only answer to this seems to be that “there is
nothing new under the sun” and that originality does not
exist (Turing 1950:450)
• Addressing ‘objections’ Turing
is dismissive of ‘consciousness’
as solipsism
• “I think that most of those who
support the argument from
consciousness could be
persuaded to abandon it rather
than be forced into the solipsist
position.” (Turing 1950:447).
• Strong emergence: conscious
beings arising from self-
organisation who can make
changes to the conditions from
which they arose
http://www.rutherfordjournal.org/article040101.html
11. ✤ Dissipative structures : e.g. tap running into a bath; water forms a spiral
at open plughole. Could be oil or wine - the dissipative structure would be
the same (Goodwin 1994:9-10)
✤ Structure independent of contents: exists only as they pass through it
✤ Could quantum computing incorporate the complexity of bifurcation?
✤ Could complex 'open state' computing generate dissipative structures?
✤ Could one/combination of dissipative structures constitute/generate
‘awareness?’
✤ Awareness, or consciousness, is required if, beyond data, storage, and pattern matching, we are to consider information, memory, recognition or learning.
✤ Conclusion: We cannot rule out the possibility of machines having
memory, information, learning, or intelligence. But we can be confident
that those digital computer engineering tools we already do have fall far
short of deserving the use of such words.
The future
Artificial intelligence beyond discrete digital state
machines?
• ‘Deep learning’ fad may already
be over : eCommerce giants
rush to hire curators
• Amazon bought Goodreads, a
website based around personal
book reviews
• Spotify have expanded their
range of human playlist makers.
• Netflix actively train their users
to tag their movies
• Facebook, in trouble because of perceived bias amongst its human news curators, fell back upon algorithms for its news feed - which has by contrast brought numerous examples of 'fake news' to the 'Trending' feature, and is in trouble now as perhaps contributing to Trump's success
• Samsung's news app divides news into what you want to know and what you need to know; the former chosen by algorithm, the latter by editors
12. Contact
✤ Dr David Kreps
✤ http://david.kreps.org
✤ www.creativeemergence.info
✤ d.g.kreps@salford.ac.uk