In a clear way, I outline how Supermathematics may apply in Artificial General Intelligence.
I describe standard Super-Hamiltonian usage with respect to D-Wave's "Quantum Boltzmann Machine".
Supermathematics and Artificial General
Intelligence
Jordan Micah Bennett1
Abstract
I clearly unravel how I came to invent the supermanifold hypothesis in
deep learning (a component of another description, called 'thought
curvature'), in relation to quantum computation.
Folioverse.appspot.com | email: jordanmicahbennett@uwimona.edu.jm | jordanmicahbennett@gmail.com | September 5, 2017
Here goes….
This text concerns attempts to construct artificial general intelligence,
which, as I often underline, may well be mankind's last invention.
If you have good knowledge of supermathematics and machine learning,
you may pitch in for a discussion, by messaging me at
jordanmicahbennett@gmail.com.
Part A – Babies know physics, plus they
learn
Back in 2016, I read somewhere that babies know some physics
intuitively.
Also, it is empirically observable that babies use that intuition to develop
abstractions of knowledge, in a reinforcement-learning-like manner.
Part B – Algorithms for reinforcement
learning and physics
Now, I knew beforehand of two major types of deep learning models:
(1) models that used reinforcement learning (DeepMind's Atari Q-learner), and
(2) models that learned laws of physics (UETorch).
However:
(a) Object detectors like (2) use something called pooling to gain
translation invariance over objects, so that the model learns regardless of
where the object is positioned in the image.
(b) In contrast, (1) excludes pooling, because (1) requires translation variance,
in order for Q-learning to apply to the changing positions of the objects in
pixels. (A minimal sketch of this tension follows below.)
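Here is a toy sketch of the invariance/variance tension, assuming NumPy; the array values and the pooling window are illustrative assumptions of mine, not taken from either model:

```python
# A toy sketch (assuming NumPy) of the invariance/variance tension: the
# array values and pooling window are illustrative, not from either model.
import numpy as np

def max_pool_1d(x, window):
    """Non-overlapping 1-D max pooling."""
    return x.reshape(-1, window).max(axis=1)

a = np.array([0, 1, 0, 0, 0, 0, 0, 0], dtype=float)  # one "object" at index 1
b = np.roll(a, 1)                                     # same object, shifted

print(max_pool_1d(a, 4))  # [1. 0.]
print(max_pool_1d(b, 4))  # [1. 0.]  -> identical: position information is lost
# Good for an object detector like (2); fatal for (1), whose Q-values must
# distinguish where the object is, in order to act on its changing position.
```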
Part C – I sought to create…
As a result, I sought a model that could deliver both translation invariance
and variance at the same time, and reasonably, part of the solution was
models that disentangled factors of variation, i.e. manifold learning
frameworks.
I didn't stop my scientific thinking at manifold learning though.
Given that cognitive science may be used to constrain machine learning
models (similar to how firms like DeepMind often use cognitive science as
a boundary on the deep learning models they produce), I sought to create a
disentangleable model that was as constrained by cognitive science as
algebra would permit.
Part D – What I did to approach the
problem...
As a result, I created something called the supermanifold hypothesis in
deep learning (a part of a system called 'thought curvature').
This was due to evidence of supersymmetry in cognitive science; I
compacted machine-learning-related algebra for disentangling into the
regime of supermanifolds. This could be seen as an extension of manifold
learning in artificial intelligence.
Given that the supermanifold hypothesis compounds φ(x, θ, θ̄)ᵀw, here is
an annotation of the hypothesis:
i. Deep learning entails φ(x; θ)ᵀw, which denotes the input space x and
learnt representations θ.
ii. Deep learning underlines that coordinates or latent spaces in the
manifold framework are learnt features/representations, or
directions that are sparse configurations of coordinates.
iii. Supermathematics entails (x, θ, θ̄), which denotes some x-valued
coordinate distribution and, by extension, directions that
compact coordinates via θ, θ̄.
iv. As such, the aforesaid (x, θ, θ̄) is subject to coordinate
transformation.
v. Thereafter, i, ii, iii, iv and [supersymmetry in cognitive science],
within the generalizable nature of Euclidean space, reasonably
effectuate φ(x, θ, θ̄)ᵀw.
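To make the notational contrast between (i) and (iii) concrete, here is a highly speculative toy sketch, assuming PyTorch; the module names, layer sizes, and the tanh-product coupling are my own illustrative assumptions, and ordinary real-valued parameters stand in for what would properly be anticommuting (Grassmann) coordinates θ, θ̄:

```python
# A highly speculative sketch (assuming PyTorch) contrasting the standard
# readout phi(x; theta)^T w of (i) with a hypothetical paired-parameter
# readout gesturing at phi(x, theta, theta_bar)^T w of (iii). NOTE:
# theta_bar here is an ordinary second parameter set, NOT a Grassmann-valued
# supermanifold coordinate; this is illustration only.
import torch
import torch.nn as nn

class StandardReadout(nn.Module):
    """phi(x; theta)^T w: learnt features phi (parameters theta), readout w."""
    def __init__(self, d_in, d_feat):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU())  # theta
        self.w = nn.Linear(d_feat, 1, bias=False)                     # w

    def forward(self, x):
        return self.w(self.phi(x))

class PairedReadout(nn.Module):
    """Illustrative phi(x, theta, theta_bar)^T w with two coupled parameter sets."""
    def __init__(self, d_in, d_feat):
        super().__init__()
        self.theta = nn.Linear(d_in, d_feat)      # one coordinate direction
        self.theta_bar = nn.Linear(d_in, d_feat)  # its paired direction
        self.w = nn.Linear(d_feat, 1, bias=False)

    def forward(self, x):
        # An ordinary elementwise product couples the paired directions; a
        # real superspace treatment would use anticommuting coordinates.
        return self.w(torch.tanh(self.theta(x)) * torch.tanh(self.theta_bar(x)))

x = torch.randn(4, 8)
print(StandardReadout(8, 16)(x).shape)  # torch.Size([4, 1])
print(PairedReadout(8, 16)(x).shape)    # torch.Size([4, 1])
```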
Part E – A Probable Experiment: A
Transverse-Field Spin (Super-)
Hamiltonian Quantum Computation
Considering the Bessel-aligned second-order linear damping equation:

φ̃ = (z + 1/λ)^{1/2} [C₁ I_{√5/2}(α(z + 1/λ)) + C₂ I_{−√5/2}(α(z + 1/λ))] e^{µz} [12],

constrained in the Montroll potential u_M(ξ) [12], via Z_λ, given that any SO(n) group is reducible to SU(n), typically SU(2) [16]; within the aforesaid constraint, the Hamiltonian operator:

H = −∑_a Γ_a σ_a^x − ∑_a b_a σ_a^z − ∑_{a,b} w_{a,b} σ_a^z σ_b^z [13]

is reasonably applicable in the quantum temporal-difference horizon π(s₁) ← argmax_a Q(s₁, a) [14], as a Super-Hamiltonian [15] in contrast.

Consequently, some odd operation of the form

{H ± F, H ± F}₁ = ±2QH,  {H + F, H − F}₁ = {H ± F, QH}₁ = {QH, QH}₁ = 0 [15],

subsuming H = −∑_a Γ_a σ_a^x − ∑_a b_a σ_a^z − ∑_{a,b} w_{a,b} σ_a^z σ_b^z [13], is theoretically absorbable in [14]. (See the 'thought curvature' paper for the relevant references.)
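As a concreteness check on the operator above, here is a minimal sketch, assuming NumPy; the qubit count and the couplings Γ_a, b_a, w_{a,b} are random placeholders of my own choosing, not values from [13] or [14]. It builds the transverse-field Hamiltonian explicitly and reads off its ground-state energy:

```python
# A minimal sketch (assuming NumPy) of the transverse-field Hamiltonian [13]:
# H = -sum_a Gamma_a X_a - sum_a b_a Z_a - sum_{a<b} w_ab Z_a Z_b
import numpy as np

I2 = np.eye(2)
SX = np.array([[0., 1.], [1., 0.]])   # Pauli sigma^x
SZ = np.array([[1., 0.], [0., -1.]])  # Pauli sigma^z

def op_on(pauli, site, n):
    """Kronecker product placing `pauli` at `site` among n qubits."""
    m = np.array([[1.0]])
    for q in range(n):
        m = np.kron(m, pauli if q == site else I2)
    return m

def transverse_field_hamiltonian(gamma, b, w):
    n = len(gamma)
    H = np.zeros((2**n, 2**n))
    for a in range(n):
        H -= gamma[a] * op_on(SX, a, n)
        H -= b[a] * op_on(SZ, a, n)
        for c in range(a + 1, n):
            H -= w[a, c] * (op_on(SZ, a, n) @ op_on(SZ, c, n))
    return H

# Placeholder couplings for a 3-qubit toy instance (NOT values from [13]).
n = 3
rng = np.random.default_rng(0)
H = transverse_field_hamiltonian(rng.normal(size=n),
                                 rng.normal(size=n),
                                 rng.normal(size=(n, n)))
print(np.linalg.eigvalsh(H)[0])  # ground-state energy of this toy instance
```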
Part F – Limitations
Notably, although thought curvature may turn out to be invalid in its simple
description in relation to Artificial General Intelligence, there is a non-trivial
possibility that the mathematics of supermanifolds may inspire future deep
learning; cutting-edge deep learning work tends to consider boundaries
observed in the biological brain, and biological brains can be evaluated using
supersymmetric operations.
In broader words, I consider the following evidence:
(1) Manifolds are in the regime of very general algorithms, where many
degrees of freedom are learnable, such that, for example, models gain the ability
to possess translation invariance and translation variance at the same time (i.e.
disentangling factors of variation).
(2) Given (1) and the generalizability of Euclidean space, together with the
instance that supersymmetric measurements persist in biological brains, it
is not absurd that Supermathematics or Lie superalgebras (on
supermanifolds) may eventually empirically apply in Deep Learning, or some
other named study of hierarchical learning in research.
Part G – Extra considerations
I am working to determine how feasible the model is.
I am also working to design suitable experiments, and to figure out what type of
p̂_data (training samples) is sufficiently applicable to the model.
Remember, if you have good knowledge of supermathematics and
machine learning, you may pitch in for a discussion, by messaging me at
jordanmicahbennett@gmail.com.