Package documentation
HierarchicalTemporalMemory.𝕊𝕢range — Constant
𝕊𝕢range is the domain of connection permanence quantization.
HierarchicalTemporalMemory.CellActivity — Type
Type of neuron layer activations.
HierarchicalTemporalMemory.DistalSynapses — Type
DistalSynapses(Nₙ, Nc, k; Nseg_init=0, params)
DistalSynapses are lateral connections within a neuron layer that attach to the dendrites of neurons, not directly to the soma (the neuron's center), and can therefore depolarize neurons but can't activate them. Compare with ProximalSynapses. Used in the context of the TemporalMemory.
Parameters:
- Nₙ: number of presynaptic neurons
- Nc: number of minicolumns in layer
- k: neurons per minicolumn
- Nseg_init: how many dendritic segments to begin with. More grow as needed while learning.
- params: DistalSynapseParams with learning rates etc.
Description
Synapses
Neurons have multiple signal integration zones: the soma, the proximal dendrites, and the distal and apical dendrites. Signals are routed to the distal dendrites through distal synapses. This type defines both the synapses themselves and the neurons' dendrites. The synapses, like the ProximalSynapses, are binary connections without weights: they have a permanence value and, above a threshold, they are connected.
The synapses can be represented as an adjacency matrix of dimensions Nₙ × Nₛ: presynaptic neurons → postsynaptic dendritic segments. This matrix is derived from the synapse permanence matrix $D_d ∈ \mathit{𝕊𝕢}^{Nₙ × Nₛ}$, which is sparse (e.g., 0.5% of entries are synapses). This sparsity affects the implementation of all low-level operations.
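For illustration, a minimal sketch (not the package's internal code) of deriving that boolean adjacency from a sparse permanence matrix; the element type UInt8 stands in for 𝕊𝕢 and the sizes are made up:
using SparseArrays
# Toy permanence matrix: 6 presynaptic neurons × 4 dendritic segments,
# ~30% of entries hold a permanence value (the rest are not synapses at all).
Dd = sprand(UInt8, 6, 4, 0.3)
θ_permanence = typemax(UInt8) ÷ 2   # ≈ 0.5 in permanence units
# A synapse is "connected" when its permanence exceeds the threshold;
# the result is the Nₙ × Nₛ boolean adjacency matrix described above.
connected = Dd .> θ_permanence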
Dendritic segments
Neurons have multiple dendritic segments carrying the distal synapses, each sufficient to depolarize the neuron (make it predictive). The neuron/segment adjacency matrix neurSeg (aka NS) also has dimensions Nₙ × Nₛ.
Learning
Instead of being randomly initialized like the proximal synapses, distal synapses and dendrite segments are grown on demand: when minicolumns can't predict their activation and burst, they trigger a growth process.
The synaptic permanence itself adapts similarly to the proximal synapses: synapses that correctly predict are strengthened, synapses that incorrectly predict are weakened. However, it's a bit more complicated to define the successful distal synapses than the proximal ones. "Winning segments" WS will adapt their synapses towards "winning neurons" WN. Since synapses are considered directional, neurons are always presynaptic and segments postsynaptic.
Winning segments are those that were predicted and then activated. Also, for every bursting minicolumn, the dendrite that best "matches" the input becomes a winner. If there is no sufficiently matching dendrite, a new one will grow on the neuron that has the fewest.
Winning neurons are again those that were predicted and then activated. Among the bursting minicolumns, the neurons bearing the winning segments are the winners. Both definitions of winners are aligned with establishing a causal relationship: prediction -> activation.
New synapses grow from WS towards a random sample of WN at every step. Strongly matching segments have a lower chance to grow new synapses than weakly matching segments.
Adding new synapses at random indices of Dd::SparseMatrixCSC is a performance bottleneck, because it involves moving the matrix's existing data. An implementation of SparseMatrixCSC with hashmaps could help, such as SimpleSparseArrays.jl
Caching
The state of the DistalSynapses is determined by the 2 matrices $D_d, \mathit{NS}$. A few extra matrices are filled in over the evolution of the distal synapses to accelerate the computations:
- connected: caches Dd .> θ_permanence
- segCol: caches the segment → column map (aka SC)
HierarchicalTemporalMemory.Hypercube — Type
Hypercube iterates over a hypercube of radius γ around xᶜ, bounded inside the space {1..sz}ᴺ.
Examples
julia> hc= Hypercube((3,3),1,(10,10));
julia> foreach(println, hc)
(2, 2)
(3, 2)
(4, 2)
(2, 3)
(3, 3)
(4, 3)
(2, 4)
(3, 4)
(4, 4)
HierarchicalTemporalMemory.Hypersphere — Type
Hypersphere is approximated with a Hypercube for simplicity.
See also: Hypercube
HierarchicalTemporalMemory.LengthfulIter — Type
LengthfulIter is an iterator wrapper that lets length information known to the programmer be attached to an iterator and used. It exists because iterator transformations, like filters, don't preserve the length info, even though the programmer might know it.
HierarchicalTemporalMemory.Network — Type
Evaluate a "vertex map", applying the Network's state to each region.
HierarchicalTemporalMemory.Network — Type
Network{T<:Integer}
Network of HTM Regions connected with feedforward (proximal) and/or contextual (distal) connections.
HierarchicalTemporalMemory.Network — Method
Network(regions; connect_forward, connect_context)
Create a network of HTM Regions. The given regions are connected with feedforward (proximal) and/or contextual (distal) connections according to the given adjacency matrices. One region serves as the input point and another one as the output, designated by their index in the regions vector.
Inputs
- regions::Vector{Region}: HTM Regions that will form the network
- connect_forward [N+2 × N+2]: adjacency matrix for feedforward connections between regions and input/output nodes
- connect_context [N+2 × N+2]: adjacency matrix for contextual connections between regions and input/output nodes
TODO: remove I/O from connect_* and move them to separate inputs.
TODO: MIMO, extend I/O to vectors of regions.
- in_forward: index of the region that will receive external feedforward input
- in_context: index of the region that will receive external contextual input
- out: index of the region that will emit network output
Adjacency matrix
The adjacency matrix has size [N+2 × N+2] because the 2 last nodes represent the input and the output connections. Since the connections are directional, the adjacency matrix has an order: pre/post.
- The first N rows/columns refer to the connections between regions.
- Row N+1 is the input connection
- Column N+2 is the output connection
Example:
# vertices:
# 1 2 3 I O
c= [
0 0 0 0 0 # 1
1 0 0 0 0 # 2
0 1 0 0 1 # 3
1 0 0 0 0 # In
0 0 0 0 0 # Out
]
# Connections:
c[2,1] # 2 -> 1
c[3,2] # 3 -> 2
c[4,1] # in -> 1
c[3,5] # 3 -> out
HierarchicalTemporalMemory.ProximalSynapses — Type
ProximalSynapses{SynapseT<:AnySynapses,ConnectedT<:AnyConnection} are the feedforward connections between 2 neuron layers, which can activate neurons and cause them to fire.
Used in the context of the SpatialPooler.
Description
The neurons of both layers are expected to form minicolumns which share the same feedforward connections. The synapses are binary: they don't have a scalar weight, but either conduct (1) or not (0). Instead, they have a permanence value Dₚ ∈ (0,1] and a connection threshold θ.
Initialization
Let presynaptic (input) neuron xᵢ and postsynaptic (output) neuron yᵢ, and a topological I/O mapping xᵢ(yᵢ) := Hypercube(yᵢ). ∀
Synapse adaptation
They adapt with a hebbian learning rule. The adaptation has a causal and an anticausal component:
- If the postsynaptic neuron fires and the presynaptic fired too, the synapse is strengthened
- If the postsynaptic neuron fires, but the presynaptic didn't, the synapse is weakened
The synaptic permanences are clipped at the boundaries of 0 and 1.
A simple implementation of the learning rule would look like this, where z: input, a: output
learn!(Dₚ,z,a)= begin
Dₚ[z,a] .= (Dₚ[z,a].>0) .* (Dₚ[z,a] .⊕ p⁺)
Dₚ[.!z,a].= (Dₚ[.!z,a].>0) .* (Dₚ[.!z,a] .⊖ p⁻)
end
Type parameters
They allow a dense or sparse matrix representation of the synapses
- SynapseT: DenseSynapses or SparseSynapses
- ConnectedT: DenseConnection or SparseConnection
See also: DistalSynapses, SpatialPooler, TemporalMemory
HierarchicalTemporalMemory.Region — Type
(r::Region)(proximal, distal=falses(0))
Stimulates the region with the given inputs, without adapting it (pure function). If distal input isn't given explicitly, recurrent connections are still used (if present).
See also Region
HierarchicalTemporalMemory.Region — Type
Region(SPParams, TMParams; recurrent= true)
A Region is a set of neurons arranged in minicolumns with 2 input gates:
- proximal (size: number of minicolumns Nc): feedforward input that activates some of the region's minicolumns
- distal (size: number of neurons Nn): contextual input that causes individual neurons to become predictive and fire preferentially on the next time step
Each gate accepts 1 or more input SDRs of the correct size, which will be ORed together. If recurrent= true, the distal gate implicitly receives the same layer's state from the last time step as input (in addition to any SDR given explicitly).
The activation of minicolumns resulting from the proximal synapses is defined by the SpatialPooler algorithm, while the activation of individual neurons is defined by the TemporalMemory.
Example
r= Region(SPParams(), TMParams(), recurrent= true)
feedforward_input= bitrand(Nc(r))
# activate based on feedforward input and the region's previous activation
active, predictive= r(feedforward_input)
# learn (adapt synapses)
step!(r, feedforward_input)
# use explicit contextual input eg from another region, that will be ORed to the region's state
distal_input= sprand(Bool,Nn(r), 0.003)|> BitVector
active, predictive= r(feedforward_input, distal_input)
HierarchicalTemporalMemory.SPParams — Type
SPParams holds the algorithm parameters for a spatial pooler with nomenclature similar to source
The dimension parameters are problem-specific and should be the first to be specified.
The tuning parameters have sensible defaults. All gated features are enabled by default.
Parameters
Dimensions
- szᵢₙ = (32,32): input dimensions
- szₛₚ = (32,32): output dimensions
- γ = 6: receptive field radius (how large an input area an output minicolumn maps to). Must be <= min(szᵢₙ)
Algorithm tuning
- s = .02: average output sparsity
- prob_synapse = .5: probability for each element of the szᵢₙ × szₛₚ space to be a synapse. Elements that roll below this value don't form a synapse and don't get a permanence value. If this is very low, the proximal synapses matrix can become sparse.
- θ_permanence01 = .5: synapse permanence connection threshold
- p⁺_01 = .1, p⁻_01 = .02: synapse permanence adaptation rate (see ProximalSynapses)
- θ_stimulus_activate = 1: minicolumn absolute activation threshold
- Tboost = 200.0: boosting mechanism's moving average filter period
- β = 1.0: boosting strength
Feature gates
- enable_local_inhibit = true
- enable_learning = true
- enable_boosting = true
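For illustration, a sketch of constructing the parameters with the keywords listed above (assuming the keyword constructor accepts these field names; the values are arbitrary):
# 1-D input of 600 bits pooled into 2048 minicolumns, default learning rates and gates
params= SPParams(szᵢₙ=(600,), szₛₚ=(2048,), γ=20, s=.02)
sp= SpatialPooler(params)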
HierarchicalTemporalMemory.Sequence — Type
Sequence(;δ,init) is an easy way to define sequences with transition function δ and starting condition init as an iterator.
Examples
Fibonacci sequence:
fibonacci_δ(a,b)= b, a+b;
fibonacci_init= 0,1;
Lazy.@as _1 HTM.Sequence(fibonacci_δ, fibonacci_init) IterTools.take(_1, 13) foreach(println, _1)
# output
0
1
1
2
3
5
8
13
21
34
55
89
144
Factorial sequence:
factorial_δ(a,b)= a*b, b+1;
factorial_init= 1,1;
Lazy.@as _1 HTM.Sequence(factorial_δ, factorial_init) IterTools.take(_1, 13) foreach(println, _1)
# output
1
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
HierarchicalTemporalMemory.SpatialPooler — Type
SpatialPooler is a learning algorithm that decorrelates the features of an input space, producing a Sparse Distributed Representation (SDR) of the input space. It defines the proximal connections of an HTM layer.
Examples
sp= SpatialPooler(SPParams(szᵢₙ=(600,), szₛₚ=(2048,)))
z= Random.bitrand(600) # input
activation= sp(z)
# or, to adapt the SP to the input as well:
activation= step!(sp,z)
Properties
It's called SpatialPooler because input patterns that share a large number of co-active neurons (i.e., that are spatially similar) are grouped together into a common output representation. It is designed to achieve a set of computational properties that support further downstream computations with SDRs, including:
- preserving topology of the input space by mapping similar inputs to similar outputs
- continuously adapting to changing statistics of the input stream
- forming fixed sparsity representations
- being robust to noise
- being fault tolerant
Algorithm overview
Mapping I/O spaces
The spatial pooler maps an input space x to an output space y of minicolumns through a matrix of proximal synapses. The input space can optionally have a topology, which the spatial pooler will preserve by mapping output minicolumn yᵢ to a subset of the input space, a Hypercube around center xᶜ:
xᶜ(yᵢ)= floor.(Int, (yᵢ.-1) .* (szᵢₙ./szₛₚ)) .+1
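A concrete worked example of this mapping (made-up sizes): a 1-D input of 10 neurons pooled into 5 minicolumns.
szᵢₙ, szₛₚ = (10,), (5,)
xᶜ(yᵢ)= floor.(Int, (yᵢ.-1) .* (szᵢₙ./szₛₚ)) .+1
xᶜ((1,))  # (1,): the first minicolumn centers on the first input neuron
xᶜ((3,))  # (5,): the middle minicolumn centers on the middle of the input
xᶜ((5,))  # (9,): the last minicolumn centers near the end of the input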
Output activation
- Calculate the overlap o(yᵢ) by propagating the input activations through the proximal synapses and adjusting by boosting factors b (control mechanism that spreads out the activation pattern across understimulated neurons)
- Inhibition Z between yᵢ (local/global): competition where only the top-K yᵢ with the highest o(yᵢ) win; ensures sparsity
- Activate winning yᵢ > activation threshold (θ_stimulus_activate)
See also: sp_activate
State variables
- synapses: includes the synapse permanence matrix Dₚ
- åₜ: [boosting] moving average (in time) activation of each minicolumn
- åₙ: [boosting] moving average (in the neighborhood) activation of each minicolumn
- φ: the adaptable radius of local inhibition
See also: ProximalSynapses, TemporalMemory
HierarchicalTemporalMemory.SpatialPooler — Method
Return the activation pattern of the SpatialPooler for the given input activation.
z: CellActivity
Example
sp= SpatialPooler(SPParams(szᵢₙ=(600,), szₛₚ=(2048,)))
z= Random.bitrand(600) # input
activation= sp(z)
# or, to adapt the SP to the input as well:
activation= step!(sp,z)
For details see: sp_activate
HierarchicalTemporalMemory.TMParams — Type
TMParams holds the algorithm parameters for a Temporal Memory with nomenclature similar to source
Dimensions
- Nc = 2500: number of columns
- k = 10: cells per column
- Nₙ = k*Nc: neurons in the layer. The number Nₛ of dendritic segments is variable
Tuning
- p⁺_01 = .12, p⁻_01 = .04 ∈ [0,1]: synapse permanence adaptation rate (see ProximalSynapses)
- LTD_p⁻_01 = .002 ∈ [0,1]: synapse long term depression rate
- θ_permanence = .5*typemax(𝕊𝕢) ∈ 𝕊𝕢: synapse permanence connection threshold
- init_permanence = .4*typemax(𝕊𝕢) ∈ 𝕊𝕢: permanence of a newly-grown synapse
- synapseSampleSize = 25 ∈ ℕ: target number of matching synapses per dendrite. Represents how many bits the dendrite targets to recognize the input. Dendrites with fewer synapses matching the input might grow new synapses.
- θ_stimulus_activate = 14 ∈ ℕ: number of matching synapses needed to depolarize the dendrite
- θ_stimulus_learn = 12 ∈ ℕ: number of matching synapses that are insufficient to depolarize the dendrite, but sufficient to trigger learning. θ_stimulus_learn <= θ_stimulus_activate
Feature gates
enable_learning = true
HierarchicalTemporalMemory.TMState — Type
TMState is a named tuple of the state variables of a temporal memory.
- α: active neurons
- Π: predictive neurons
- WN: winning neurons
- Πₛ: predictive dendritic segments (to calculate winning segments)
- Mₛ: matching dendritic segments (didn't receive enough input to activate, but enough to learn)
- ovp_Mₛ:
HierarchicalTemporalMemory.TemporalMemory — Type
Active and predictive cells given the minicolumn activations and any distal input.
See also: tm_activate, tm_predict
HierarchicalTemporalMemory.TemporalMemory — Type
TemporalMemory(params::TMParams; recurrent= true, distal_input_size=0)
TemporalMemory learns to predict sequences of input Sparse Distributed Representations (SDRs), usually generated by a SpatialPooler. It learns to represent each input symbol in the temporal context of the symbols that come before it in the sequence, using the individual neurons of each minicolumn and their distal synapses.
When considering a neuron layer with proximal and distal synapses, the spatial pooler is a way to activate and learn the proximal synapses, while the temporal memory is a way to activate and learn the distal synapses.
Parameters:
- params: TMParams defining the layer size and algorithm constants
- recurrent: presence of recurrent distal connections between the layer's neurons
- distal_input_size: the number of presynaptic neurons for each layer with incoming distal connections (apart from the same layer's neurons)
High-order predictions and Ambiguity
The neurons-thousand-synapses paper describes the Temporal Memory's properties, and especially the ability to
- make "high-order" predictions, based on previous inputs potentially going far back in time
- represent ambiguity in the form of simultaneous predictions
For more information see figures 2,3 in the paper.
TM activation
Overview of the temporal memory's process:
- Activate neurons (fire, proximal synapses)
- Predict neurons (depolarize, distal/apical synapses)
- Learn distal/apical synapses:
- adapt existing synapses
- create new synapses/dendrites
Activation
An SDR input activates some minicolumns of the neuron layer. If some neurons in the minicolumn were predicted at the previous step, they activate faster than the rest and inhibit them. If no neuron was predicted, all the neurons fire (minicolumn bursting).
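A minimal sketch of this per-minicolumn rule (an illustration only, not the package's implementation; names and shapes are assumptions):
# c: BitVector of active minicolumns (length Nc)
# Π: BitVector of previously predictive neurons (length Nₙ = k*Nc), column-major per minicolumn
function activate_sketch(c, Π, k)
  Πm= reshape(Π, k, :)              # k × Nc: predictions grouped per minicolumn
  a= falses(size(Πm))
  for (j, active) in enumerate(c)
    active || continue              # inactive minicolumns stay silent
    if any(Πm[:,j])
      a[:,j] .= Πm[:,j]             # predicted neurons fire first and inhibit the rest
    else
      a[:,j] .= true                # no prediction: the whole minicolumn bursts
    end
  end
  vec(a)                            # back to a length-Nₙ activation vector
end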
TODO
See also: TMParams for parameter and symbol description, DistalSynapses, TMState
HierarchicalTemporalMemory.Truesof — Type
Truesof is a non-allocating version of findall: iterates over the trues of a BitArray.
Examples
julia> b= [true,true,true,false,false]|> BitVector
5-element BitVector:
1
1
1
0
0
julia> foreach(i-> print(string(i)*" "), HTM.Truesof(b))
1 2 3
HierarchicalTemporalMemory.𝕊𝕢 — Type
𝕊𝕢 is the type of connection permanences, defining their quantization domain.
HierarchicalTemporalMemory.Wd — Method
Wd(tm::TemporalMemory)
The connected distal synapses of the temporal memory in {presynaptic neuron × postsynaptic neuron} format. The TM natively has a synapse matrix of {presynaptic neuron × postsynaptic dendritic segment}: Wd(::DistalSynapses). It is joined with the neuron-segment adjacency matrix NS(::DistalSynapses). This connection matrix is obtained by the matrix product Wd * NS' (NS transposed, so that segments map back to their postsynaptic neurons).
HierarchicalTemporalMemory.activateNetwork — Method
activateNetwork(n,f,proximal,distal) calculates each region's input and applies function f. It's the common denominator of network activation functions.
HierarchicalTemporalMemory.add_edges! — Method
add_edges!(g,adjacency) inserts the connections from an adjacency matrix into the graph.
HierarchicalTemporalMemory.bestmatch — Method
bestmatch(s::DistalSynapses, col, povp_Mₛ) finds the best-matching segment in a column, as long as its overlap with the input (its activation) is > θ_stimulus_learn. Otherwise, returns nothing.
HierarchicalTemporalMemory.bitarray — Method
bitarray(idx, dims)
Create a bitarray with true only at idx.
HierarchicalTemporalMemory.calculate_WS! — Method
calculate_WS!(pΠₛ, povp_Mₛ, α, B) finds the winning segments, growing new ones where necessary.
HierarchicalTemporalMemory.countmap_empty — Method
countmap_empty(x) [Dict]: count the frequency of occurrence for each element in x.
HierarchicalTemporalMemory.decayS — Method
decayS(s::DistalSynapses, pMₛ, α) are the dendritic segments that should decay according to LTD (long term depression). These are the segments that at the previous moment had enough potential synapses with active neurons to "match" the input (pMₛ), but didn't contribute to their neuron firing at this moment (they didn't activate strongly enough to depolarize it).
HierarchicalTemporalMemory.distalSynapses — Method
distalSynapses(r::Region) is the adjacency matrix of the Region's connected distal synapses in {pre- × post-}synaptic neuron format.
HierarchicalTemporalMemory.growseg! — Method
growseg!(s::DistalSynapses, WS_col::Vector{Option{Int}}, burstingcolidx) grows 1 new segment for each column with nothing as winning segment and replaces nothing with it. It resizes all the DistalSynapses matrices to append new dendritic segments. For each bursting minicolumn without a winning segment, the segment grows on the neuron with the fewest segments.
HierarchicalTemporalMemory.growsynapses! — Method
growsynapses!(s::DistalSynapses, pWN::CellActivity, WS, povp_Mₛ) adds synapses between winning dendrites (WS) and a random sample of previously winning neurons (pWN).
For each dendrite, target neurons are selected among pWN, each neuron picked with probability $\frac{\mathit{synapseSampleSize} - \mathit{prevOverlap}}{\mathrm{length}(\mathit{pWN})}$ (Bernoulli sampling).
This involves inserting new elements in random places of a SparseMatrixCSC and is the algorithm's performance bottleneck.
- TODO: try replacing CSC with a Dict implementation of sparse matrices.
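A small numeric sketch of that sampling rule (made-up values; synapseSampleSize is the TMParams field, prevOverlap stands for the segment's previous overlap from povp_Mₛ):
synapseSampleSize= 25   # target matching synapses per dendrite (TMParams default)
prevOverlap= 13         # how many pWN bits this dendrite already matched
n_pWN= 40               # number of previously winning neurons
# Each previously winning neuron is picked independently with this probability,
# so strongly matching segments (high prevOverlap) grow fewer new synapses:
p= (synapseSampleSize - prevOverlap) / n_pWN   # == 0.3
grow= rand(n_pWN) .< p  # Bernoulli sample: which pWN get a new synapse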
HierarchicalTemporalMemory.hcat!! — Method
hcat!!(s::SparseMatrixCSC, I, V)
WARNING: s is invalid after this operation! Its symbol should be reassigned to the output of hcat!!. Append new columns to s with 1 value V[c] at row I[c] each. Because SparseMatrixCSC is immutable, return a new, valid SparseMatrixCSC that points to the original's data structures. For large matrices, this is much faster than hcat.
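The intended usage pattern, per the warning above (a sketch; I and V are whatever index/value vectors are being appended):
s= hcat!!(s, I, V)   # reassign the same binding so the stale matrix can't be used by accident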
HierarchicalTemporalMemory.leastusedcell — Method
leastusedcell(s::DistalSynapses, col) finds the neuron of a minicolumn with the fewest dendrites.
- TODO: try Memoize.jl !
HierarchicalTemporalMemory.networkPlot — Method
networkPlot(network, plotFunction, extraArgs...)
Uses plotFunction to plot a network graph. Assumes that plotFunction == GraphMakie.graphplot, but might work with others. extraArgs are passed as-is to the plotting function.
HierarchicalTemporalMemory.predict! — Method
predict!(classifier::SDRClassifier, Π, target::Int; enable_learning=true)
Predict the probability that the output represents each bucket of an arithmetic encoder. 0 <= predict! <= 1
HierarchicalTemporalMemory.propagation_delay — Method
propagation_delay(n::Network)
The shortest time after repeated stimulation with a that n(a) will be related to a (the minimum number of repeated stimulations). The network will always produce some activity, but until that point it will be unrelated to the input a. The signal traverses 1 Region inside the Network with each stimulation, therefore the number of regions between the input and the output of the network is the minimum time necessary before the output has a relationship with the input.
The network might not be stable at this point and continued stimulation with a might allow longer paths to the output to contribute.
HierarchicalTemporalMemory.proximalSynapses — Method
proximalSynapses(r::Region) is the adjacency matrix of the Region's connected proximal synapses in {pre- × post-}synaptic neuron format.
HierarchicalTemporalMemory.reset! — Method
reset!(r::Region) unlearns the region's synapses (and other internal state), restoring it to the state at construction.
HierarchicalTemporalMemory.reverse_simpleArithmetic — Method
reverse_simpleArithmetic(bucketProbDist, algorithm, params)
Reverses the simple arithmetic encoder, given the same parameters. Inputs a probability distribution across all buckets and collapses it to a single arithmetic value, representing the most likely estimation in the timeseries' domain (like a defuzzifier).
Arguments
- bucketProbDist [nbucket]: discrete probability of each bucket [0-1]
- algorithm: {'mean','mode','highmean'}; 'highmean' is the mean of the highest-estimated values
HierarchicalTemporalMemory.sp_activate — Method
sp_activate(sp::SpatialPooler, z) calculates the SP's output activation for given input activation z.
Algorithm
- Overlap o(yᵢ)
  - Propagate the input activations through the proximal synapses (matrix multiply)
  - Apply boosting factors b: control mechanism that spreads out the activation pattern across understimulated neurons (homeostatic excitability control)
- Inhibition Z between yᵢ (local/global): competition where only the top-k yᵢ with the highest o(yᵢ) win; ensures sparsity. The competition happens within an area around each neuron.
  - k: number of winners; depends on desired sparsity (s, see SPParams) and area size
  - θ_inhibit: inhibition threshold per neighborhood := o(k-th yᵢ)
  - Z(o(yᵢ)): convolution of o with θ_inhibit
- Activate winning yᵢ > activation threshold (θ_stimulus_activate)
The local sparsity s tends to the sparsity of the entire layer as it grows larger, but for small values or small inhibition radius it diverges, because of the limited & integral number of neurons winning in each neighborhood. This could be addressed by tie breaking, but it doesn't seem to have much practical importance.
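For intuition, a minimal sketch of the global-inhibition case described above (not the package's sp_activate; W, b and the sizes are placeholders):
using Random, SparseArrays
Nin, Ncol, s= 100, 64, .03
W= sprand(Bool, Ncol, Nin, 0.05)  # connected proximal synapses (minicolumn × presynaptic)
b= ones(Ncol)                     # boosting factors (all neutral in this sketch)
z= bitrand(Nin)                   # input SDR
o= b .* (W*z)                     # overlap of each minicolumn, adjusted by boosting
k= max(1, round(Int, s*Ncol))     # number of winners for the target sparsity
θ_inhibit= partialsort(o, k; rev=true)  # overlap of the k-th best minicolumn
a= o .>= max(θ_inhibit, 1)        # winners that also clear θ_stimulus_activate = 1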
HierarchicalTemporalMemory.sparse_foreach — Method
sparse_foreach(f, s::SparseMatrixCSC, columnIdx) iterates over the columns of the sparse matrix s specified by columnIdx and applies f(columnElt, columnRowIdx) to each. Column elements are a @view into the nonzero elements at the given column of s.
See also: sparse_map
HierarchicalTemporalMemory.sparse_map — Method
sparse_map(f, s::SparseMatrixCSC, columnIdx) maps each column of the sparse matrix s specified by columnIdx to f(columnElt, columnRowIdx). Column elements are a @view into the nonzero elements at the given column of s.
See also: sparse_foreach
HierarchicalTemporalMemory.step! — Function
step!(r::Region, proximal, distal=falses(0); learn= true)
Stimulates the region with the given inputs, adapting it following the learning rules. If distal input isn't given explicitly, recurrent connections are still used (if present). If learn == false, the synapses are not adapted.
See also Region
HierarchicalTemporalMemory.step! — Function
step!(sp::SpatialPooler, z::CellActivity, learn= true)
Evolves the Spatial Pooler to the next timestep by evolving each of its constituents (synapses, boosting, inhibition radius) and returns the output activation. If learn == false, the synapses are not adapted.
See also: sp_activate
HierarchicalTemporalMemory.step! — Method
step!(s::ProximalSynapses, z, a, params) adapts the proximal synapses' permanences with a hebbian learning rule on input z and activation a. The adaptation has a causal and an anticausal component:
- If the postsynaptic neuron fires and the presynaptic fired too, the synapse is strengthened
- If the postsynaptic neuron fires, but the presynaptic didn't, the synapse is weakened
See also: ProximalSynapses
HierarchicalTemporalMemory.step_boost! — Method
step_boost!(sp::SpatialPooler, a) evolves the boosting factors b (see sp_activate). They depend on:
- åₜ: moving average (in time) activation of each minicolumn
- åₙ: moving average (in neighborhood) activation of each minicolumn
HierarchicalTemporalMemory.tm_activate — Method
tm_activate(tm::TemporalMemory, c, Π) calculates
- which minicolumns burst,
- which neurons in the layer are activated
- which become predictive
for minicolumn activation c (size Nc) given by the SpatialPooler and the previously predictive neurons Π (size Nₙ).
Returns
- a: neuron activation (Nₙ)
- B: bursting minicolumns (Nc)
- WN: "winning" neurons (Nₙ)
See also: tm_predict, DistalSynapses
HierarchicalTemporalMemory.tm_predict — Method
tm_predict(tm::TemporalMemory, α) calculates which neurons will be predictive at the next step given the currently active neurons + distal input α. size(α) must match the presynaptic length of tm.distalSynapses.
Returns
- Π: predictive neurons (Nₙ)
- Πₛ: predictive dendritic segments (Nₛ) (caching)
- Mₛ: matching dendritic segments (Nₛ) (learning)
- ovp_Mₛ: subthreshold-matching dendritic segments (Nₛ) (learning)
HierarchicalTemporalMemory.vcat!! — Method
vcat!!(s::SparseMatrixCSC, J, V)
WARNING: s is invalid after this operation! Its symbol should be reassigned to the output of vcat!!. Append new rows to s with 1 value V[r] at column J[r] each. Because SparseMatrixCSC is immutable, return a new, valid SparseMatrixCSC that points to the original's data structures. For large matrices, this is much faster than vcat.
HierarchicalTemporalMemory.@percolumn — Macro
@percolumn(f,a,b,k)
Macro to fold a large vector a into minicolumns of k elements and apply the binary operator f elementwise.
- a: vector of size Nc*k, column-major (important because it will fold every k elements)
- b: vector of size Nc
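A plain-Julia sketch of what this folding means (an illustration of the semantics, not the macro's actual expansion):
k, Nc= 2, 3
a= [1,2,3,4,5,6]            # length Nc*k, column-major per minicolumn
b= [10,20,30]               # length Nc
f= +
f.(reshape(a, k, Nc), b')   # k × Nc result: f applied elementwise per minicolumn
# 11  23  35
# 12  24  36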
HierarchicalTemporalMemory.@percolumn — Macro
@percolumn(reduce,a,k)
Macro to reduce(a) per column k.