Package documentation

HierarchicalTemporalMemory.DistalSynapsesType
DistalSynapses(Nₙ, Nc, k; Nseg_init=0, params)

DistalSynapses are lateral connections within a neuron layer that attach to the dendrites of neurons, not directly to the soma (neuron's center), and can therefore depolarize neurons but can't activate them. Compare with ProximalSynapses. Used in the context of the TemporalMemory.

Parameters:

  • Nₙ: number of presynaptic neurons
  • Nc: number of minicolumns in layer
  • k: neurons per minicolumn
  • Nseg_init: how many dendritic segments to begin with. More grow as needed while learning.
  • params: DistalSynapseParams with learning rates etc

Description

Synapses

Neurons have multiple signal integration zones: the soma, the proximal dendrites, and the distal and apical dendrites. Signals are routed to the distal dendrites through distal synapses. This type defines both the synapses themselves and the neuron's dendrites. Like the ProximalSynapses, the synapses are binary connections without weights: they carry a permanence value and are connected when it exceeds a threshold.

The synapses can be represented as an adjacency matrix of dimensions Nₙ × Nₛ: presynaptic neurons -> postsynaptic dendritic segments. This matrix is derived from the synapse permanence matrix $D_d ∈ \mathit{𝕊𝕢}^{Nₙ × Nₛ}$, which is sparse (e.g. 0.5% of entries are synapses). This affects the implementation of all low-level operations.
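
A minimal sketch of this representation, assuming a plain SparseMatrixCSC with UInt8 standing in for the package's 𝕊𝕢 permanence type (illustrative values, not the package's constructor):

using SparseArrays

Nn, Ns = 6, 4
θ_permanence = UInt8(128)
# sparse permanence matrix: presynaptic neurons × postsynaptic dendritic segments
Dd = sparse([1,2,5,3], [1,1,2,4], UInt8[40, 200, 150, 90], Nn, Ns)
# connected-synapse adjacency matrix: same sparsity pattern, boolean entries above the threshold
connected = Dd .> θ_permanence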

Dendritic segments

Neurons have multiple dendritic segments carrying the distal synapses, each sufficient to depolarize the neuron (make it predictive). The neuron/segment adjacency matrix neurSeg (aka NS) also has dimensions Nₙ × Nₛ.
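
A toy sketch of how the two matrices combine to depolarize neurons (hypothetical values; θ_stimulus_activate is set to 1 here only for illustration, and the package's actual tm_predict implementation differs):

using SparseArrays

Nn, Ns = 6, 4
connected = sparse([2,5], [1,2], [true,true], Nn, Ns)           # connected synapses: neuron × segment
NS        = sparse([1,1,2,3], [1,2,3,4], fill(true,4), Nn, Ns)  # segment ownership: neuron × segment
α = zeros(Bool, Nn); α[[2,5]] .= true                           # currently active neurons

segment_overlap = connected' * α        # matching connected synapses per segment
Πₛ = Vector(segment_overlap .≥ 1)       # depolarized ("predictive") segments
Π  = (NS * Πₛ) .> 0                     # a neuron is predictive if any of its segments is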

Learning

Instead of being randomly initialized like the proximal synapses, distal synapses and dendrite segments are grown on demand: when minicolumns can't predict their activation and burst, they trigger a growth process.

The synaptic permanence itself adapts similarly to the proximal synapses: synapses that correctly predict are increased, synapses that incorrectly predict are decreased. However, defining the successful distal synapses is a bit more involved than for the proximal synapses. "Winning segments" WS will adapt their synapses towards "winning neurons" WN. Since synapses are considered directional, neurons are always presynaptic and segments postsynaptic.

Winning segments are those that were predicted and then activated. Also, for every bursting minicolumn the dendrite that best "matches" the input becomes the winner. If there is no sufficiently matching dendrite, a new one will grow on the neuron that has the fewest.

Winning neurons are again those that were predicted and then activated. Among the bursting minicolumns, the neurons bearing the winning segments are the winners. Both definitions of winners are aligned with establishing a causal relationship: prediction -> activation.

New synapses grow from WS towards a random sample of WN at every step. Strongly matching segments have a lower chance to grow new synapses than weakly matching segments.

Note

Adding new synapses at random indices of Dd::SparseMatrixCSC is a performance bottleneck, because it involves moving the matrix's existing data. An implementation of SparseMatrixCSC with hashmaps could help, such as SimpleSparseArrays.jl

Caching

The state of the DistalSynapses is determined by the 2 matrices $D_d, \mathit{NS}$. A few extra matrices are filled in over the evolution of the distal synapses to accelerate the computations:

  • connected caches Dd > θ_permanence
  • segCol caches the segment - column map (aka SC)
source
HierarchicalTemporalMemory.HypercubeType

Hypercube iterates over a hypercube of radius γ around xᶜ, bounded inside the space {1..sz}ᴺ.

Examples

julia> hc= Hypercube((3,3),1,(10,10));

julia> foreach(println, hc)
(2, 2)
(3, 2)
(4, 2)
(2, 3)
(3, 3)
(4, 3)
(2, 4)
(3, 4)
(4, 4)
source
HierarchicalTemporalMemory.LengthfulIterType

LengthfulIter is an iterator wrapper that attaches a length known to the programmer. It exists because iterator transformations, such as filters, don't preserve length information, even when the programmer knows the resulting length.

source
HierarchicalTemporalMemory.NetworkMethod
Network(regions; connect_forward, connect_context)

Create a network of HTM Regions. The given regions are connected with feedforward (proximal) and/or contextual (distal) connections according to the given adjacency matrices. One region serves as the input point and another as the output, designated by their index in the regions vector.

Inputs

  • regions::Vector{Regions}: HTM Regions that will form the network
  • connect_forward: [N+2 × N+2]: adjacency matrix for feedforward connections between regions and input/output nodes
  • connect_context: [N+2 × N+2]: adjacency matrix for contextual connections between regions and input/output nodes

TODO: remove I/O from connect_* and move them to separate inputs. TODO: MIMO, extend I/O to vectors of regions.

  • in_forward: index of the region that will receive external feedforward input
  • in_context: index of the region that will receive external contextual input
  • out: index of the region that will emit network output

Adjacency matrix

The adjacency matrix has size [N+2 × N+2] because the 2 last nodes represent the input and the output connections. Since the connections are directional, the adjacency matrix has an order: pre/post.

  • The first N rows/columns refer to the connections between regions.
  • Row N+1 is the input connection
  • Column N+2 is the output connection

Example:

# vertices:
# 1 2 3 I O
c= [
  0 0 0 0 0  # 1
  1 0 0 0 0  # 2
  0 1 0 0 1  # 3
  1 0 0 0 0  # In
  0 0 0 0 0  # Out
]
# Connections:
c[2,1]  # 2 -> 1
c[3,2]  # 3 -> 2
c[4,1]  # in -> 1
c[3,5]  # 3 -> out
source
HierarchicalTemporalMemory.ProximalSynapsesType

ProximalSynapses{SynapseT<:AnySynapses,ConnectedT<:AnyConnection} are the feedforward connections between 2 neuron layers, which can activate neurons and cause them to fire.

Used in the context of the SpatialPooler.

Description

The neurons of both layers are expected to form minicolumns which share the same feedforward connections. The synapses are binary: they don't have a scalar weight, but either conduct (1) or not (0). Instead, they have a permanence value Dₚ ∈ (0,1] and a connection threshold θ.

Initialization

Let xᵢ be a presynaptic (input) neuron and yᵢ a postsynaptic (output) neuron, with a topological I/O mapping xᵢ(yᵢ) := Hypercube(yᵢ): each yᵢ can form synapses only with inputs inside its hypercube (see the sketch below).
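
A standalone 1-D sketch of this topological initialization, assuming the Hypercube radius γ and prob_synapse semantics described in SPParams (the package's constructor and quantized permanence type differ):

szᵢₙ, szₛₚ, γ, prob_synapse = (10,), (5,), 3, 0.5
xᶜ(yᵢ) = floor.(Int, (yᵢ .- 1) .* (szᵢₙ ./ szₛₚ)) .+ 1

Dₚ = zeros(szᵢₙ[1], szₛₚ[1])
for y in 1:szₛₚ[1]
  c = xᶜ((y,))[1]
  for x in max(1, c-γ):min(szᵢₙ[1], c+γ)        # 1-D "Hypercube" around the center
    rand() < prob_synapse && (Dₚ[x,y] = rand())  # potential synapse gets a random permanence
  end
end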

Synapse adaptation

They adapt with a hebbian learning rule. The adaptation has a causal and an anticausal component:

  • If the postsynaptic neuron fires and the presynaptic fired too, the synapse is strengthened
  • If the postsynaptic neuron fires, but the presynaptic didn't, the synapse is weakened

The synaptic permanences are clipped at the boundaries of 0 and 1.

A simple implementation of the learning rule would look like this, where z: input, a: output

learn!(Dₚ,z,a)= begin
  # strengthen existing synapses (permanence > 0) from active inputs z to active outputs a
  Dₚ[z,a]  .= (Dₚ[z,a].>0)   .* (Dₚ[z,a]   .⊕ p⁺)
  # weaken existing synapses from inactive inputs to active outputs (⊕/⊖: clipped to the permanence bounds)
  Dₚ[.!z,a].= (Dₚ[.!z,a].>0) .* (Dₚ[.!z,a] .⊖ p⁻)
end

Type parameters

They allow a dense or sparse matrix representation of the synapses

  • SynapseT: DenseSynapses or SparseSynapses
  • ConnectedT: DenseConnection or SparseConnection

See also: DistalSynapses, SpatialPooler, TemporalMemory

source
HierarchicalTemporalMemory.RegionType
(r::Region)(proximal, distal=falses(0))

Stimulates the region with the given inputs, without adapting it (pure function). If distal input isn't given explicitly, recurrent connections are still used (if present).

See also Region

source
HierarchicalTemporalMemory.RegionType
Region(SPParams, TMParams; recurrent= true)

A Region is a set of neurons arranged in minicolumns with 2 input gates:

  • proximal (size: number of minicolumns Nc): feedforward input that activates some of the region's minicolumns
  • distal (size: number of neurons Nn): contextual input that causes individual neurons to become predictive and fire preferentially on the next time step

Each gate accepts 1 or more input SDRs of the correct size, which will be ORed together. If recurrent= true, the distal gate implicitly receives the same layer's state from the last time step as input (in addition to any SDR given explicitly).

The activation of minicolumns resulting from the proximal synapses is defined by the SpatialPooler algorithm, while the activation of individual neurons is defined by the TemporalMemory.

Example

r= Region(SPParams(), TMParams(), recurrent= true)
feedforward_input= bitrand(Nc(r))

# activate based on feedforward input and the region's previous activation
active, predictive= r(feedforward_input)
# learn (adapt synapses)
step!(r, feedforward_input)

# use explicit contextual input eg from another region, that will be ORed to the region's state
distal_input= sprand(Bool,Nn(r), 0.003)|> BitVector
active, predictive= r(feedforward_input, distal_input)
source
HierarchicalTemporalMemory.SPParamsType

SPParams holds the algorithm parameters for a spatial pooler, with nomenclature similar to the source paper.

The dimension parameters are problem-specific and should be the first to be specified.

The tuning parameters have sensible defaults. All gated features are enabled by default.

Parameters

Dimensions

  • szᵢₙ = (32,32): input dimensions
  • szₛₚ = (32,32): output dimensions
  • γ = 6: receptive field radius (how large an input area an output minicolumn maps to). Must be <= min(szᵢₙ)

Algorithm tuning

  • s = .02: average output sparsity
  • prob_synapse = .5: probability for each element of the szᵢₙ × szₛₚ space to be a synapse. Elements that roll below this value don't form a synapse and don't get a permanence value. If this is very low, the proximal synapses matrix can become sparse.
  • θ_permanence01 = .5: synapse permanence connection threshold
  • p⁺_01 = .1 , p⁻_01 = .02: synapse permanence adaptation rate (see ProximalSynapses)
  • θ_stimulus_activate = 1: minicolumn absolute activation threshold
  • Tboost = 200.0: boosting mechanism's moving average filter period
  • β = 1.0: boosting strength

Feature gates

  • enable_local_inhibit = true
  • enable_learning = true
  • enable_boosting = true
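
For example, keyword construction (as used in the SpatialPooler and Region examples), overriding only the problem-specific dimensions while the remaining fields keep the defaults listed above:

params = SPParams(szᵢₙ=(28,28), szₛₚ=(64,64), s=0.02)
sp = SpatialPooler(params)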
source
HierarchicalTemporalMemory.SequenceType

Sequence(;δ,init) is an easy way to define sequences with transition function δ and starting condition init as an iterator.

Examples

Fibonacci sequence:

fibonacci_δ(a,b)= b, a+b;
fibonacci_init= 0,1;
Lazy.@as _1 HTM.Sequence(fibonacci_δ, fibonacci_init) IterTools.take(_1, 13) foreach(println, _1)

# output
0
1
1
2
3
5
8
13
21
34
55
89
144

Factorial sequence:

factorial_δ(a,b)= a*b, b+1;
factorial_init= 1,1;
Lazy.@as _1 HTM.Sequence(factorial_δ, factorial_init) IterTools.take(_1, 13) foreach(println, _1)

# output
1
1
2
6
24
120
720
5040
40320
362880
3628800
39916800
479001600
source
HierarchicalTemporalMemory.SpatialPoolerType

SpatialPooler is a learning algorithm that decorrelates the features of an input space, producing a Sparse Distributed Representation (SDR) of the input space. It defines the proximal connections of an HTM layer.

Examples

sp= SpatialPooler(SPParams(szᵢₙ=(600,), szₛₚ=(2048,)))
z= Random.bitrand(600)  # input
activation= sp(z)
# or, to adapt the SP to the input as well:
activation= step!(sp,z)

Properties

It's called SpatialPooler because input patterns that share a large number of co-active neurons (i.e., that are spatially similar) are grouped together into a common output representation. It is designed to achieve a set of computational properties that support further downstream computations with SDRs, including:

  • preserving topology of the input space by mapping similar inputs to similar outputs
  • continuously adapting to changing statistics of the input stream
  • forming fixed sparsity representations
  • being robust to noise
  • being fault tolerant


Algorithm overview

Mapping I/O spaces

The spatial pooler maps an input space x to an output space y of minicolumns through a matrix of proximal synapses. The input space can optionally have a topology, which the spatial pooler will preserve by mapping output minicolumn yᵢ to a subset of the input space, a Hypercube around center xᶜ:

xᶜ(yᵢ)= floor.(Int, (yᵢ.-1) .* (szᵢₙ./szₛₚ)) .+ 1
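
For instance, with the 1-D sizes from the Examples section above (szᵢₙ=(600,), szₛₚ=(2048,)), minicolumn 100 is centered on input neuron 30:

szᵢₙ, szₛₚ = (600,), (2048,)
xᶜ(yᵢ) = floor.(Int, (yᵢ .- 1) .* (szᵢₙ ./ szₛₚ)) .+ 1
xᶜ((100,))   # (30,): floor(99 * 600/2048) + 1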

Output activation

  1. Calculate the overlap o(yᵢ) by propagating the input activations through the proximal synapses and adjusting by boosting factors b (control mechanism that spreads out the activation pattern across understimulated neurons)
  2. Inhibition Z between yᵢ (local/global): competition where only the top-K yᵢ with the highest o(yᵢ) win; ensures sparsity
  3. Activate winning yᵢ > activation threshold (θ_stimulus_activate)

See also: sp_activate

State variables

  • synapses: includes the synapse permanence matrix Dₚ
  • åₜ: [boosting] moving (in time) average activation of each minicolumn
  • åₙ: [boosting] moving (in time) average (in neighborhood) activation of each minicolumn
  • φ: the adaptible radius of local inhibition

See also: ProximalSynapses, TemporalMemory

source
HierarchicalTemporalMemory.TMParamsType

TMParams holds the algorithm parameters for a Temporal Memory, with nomenclature similar to the source paper.

Dimensions

  • Nc = 2500: number of columns
  • k = 10: cells per column
  • Nₙ = k*Nc: number of neurons in the layer. The number of dendritic segments Nₛ is variable.

Tuning

  • p⁺_01 = .12 , p⁻_01 = .04 ∈ [0,1]: synapse permanence adaptation rate (see ProximalSynapses)
  • LTD_p⁻_01 = .002 ∈ [0,1]: synapse long term depression rate
  • θ_permanence = .5*typemax(𝕊𝕢) ∈ 𝕊𝕢: synapse permanence connection threshold
  • init_permanence = .4*typemax(𝕊𝕢) ∈ 𝕊𝕢: permanence of a newly-grown synapse
  • synapseSampleSize = 25 ∈ ℕ: target number of matching synapses per dendrite. Represents how many bits the dendrite targets to recognize the input. Dendrites with fewer synapses matching the input might grow new synapses.
  • θ_stimulus_activate = 14 ∈ ℕ: number of matching synapses needed to depolarize the dendrite
  • θ_stimulus_learn = 12 ∈ ℕ: number of matching synapses that are insufficient to depolarize the dendrite, but sufficient to trigger learning. θ_stimulus_learn <= θ_stimulus_activate

Feature gates

  • enable_learning = true
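
For example, assuming keyword construction analogous to SPParams (TMParams() with all defaults appears in the Region example), overriding only the layer dimensions:

params = TMParams(Nc=2048, k=16)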
source
HierarchicalTemporalMemory.TMStateType

TMState is a named tuple of the state variables of a temporal memory.

  • α: active neurons
  • Π: predictive neurons
  • WN: winning neurons
  • Πₛ: predictive dendritic segments (to calculate winning segments)
  • Mₛ: matching dendritic segments (didn't receive enough input to activate, but enough to learn)
  • ovp_Mₛ: input overlap of the matching segments (used to find best-matching segments and to grow new synapses)
source
HierarchicalTemporalMemory.TemporalMemoryType
TemporalMemory(params::TMParams; recurrent= true, distal_input_size=0)

TemporalMemory learns to predict sequences of input Sparse Distributed Representations (SDRs), usually generated by a SpatialPooler. It learns to represent each input symbol in the temporal context of the symbols that come before it in the sequence, using the individual neurons of each minicolumn and their distal synapses.

When considering a neuron layer with proximal and distal synapses, the spatial pooler is a way to activate and learn the proximal synapses, while the temporal memory is a way to activate and learn the distal synapses.

Parameters:

  • params: TMParams defining the layer size and algorithm constants
  • recurrent: presence of recurrent distal connections between the layer's neurons
  • distal_input_size: the number of presynaptic neurons for each layer with incoming distal connections (apart from same layer's neurons)
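
For example, following the signature above, a layer that predicts from its own recurrent activity plus an external distal input of 1000 presynaptic neurons:

tm = TemporalMemory(TMParams(); recurrent=true, distal_input_size=1000)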

High-order predictions and Ambiguity

The neurons-thousand-synapses paper describes the Temporal Memory's properties, and especially the ability to

  • make "high-order" predictions, based on previous inputs potentially going far back in time
  • represent ambiguity in the form of simultaneous predictions

For more information see figures 2,3 in the paper.

TM activation

Overview of the temporal memory's process:

  1. Activate neurons (fire, proximal synapses)
  2. Predict neurons (depolarize, distal/apical synapses)
  3. Learn distal/apical synapses:
    • adapt existing synapses
    • create new synapses/dendrites

Activation

An SDR input activates some minicolumns of the neuron layer. If some neurons in the minicolumn were predicted at the previous step, they activate faster than the rest and inhibit them. If no neuron was predicted, all the neurons fire (minicolumn bursting).
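
A toy sketch of this rule (not the package's implementation, which works on its own types and uses @percolumn); neurons are laid out column-major, k per minicolumn:

Nc, k = 4, 3
c = BitVector([1,0,1,0])                 # minicolumn activation from the SpatialPooler
Π = falses(Nc*k); Π[1] = true            # neuron 1 (in minicolumn 1) was predicted

Πcol  = reshape(Π, k, Nc)                # k × Nc: one column per minicolumn
burst = c .& .!vec(any(Πcol, dims=1))    # active minicolumns with no predicted neuron burst
a     = vec((Πcol .| burst') .& c')      # predicted-and-active neurons, plus whole bursting columns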

TODO


See also: TMParams for parameter and symbol description, DistalSynapses, TMState

source
HierarchicalTemporalMemory.TruesofType

Truesof is a non-allocating version of findall: it iterates over the indices of the true entries of a BitArray.

Examples

julia> b= [true,true,true,false,false]|> BitVector
5-element BitVector:
 1
 1
 1
 0
 0

julia> foreach(i-> print(string(i)*" "), HTM.Truesof(b))
1 2 3
source
HierarchicalTemporalMemory.WdMethod
Wd(tm::TemporalMemory)

The connected distal synapses of the temporal memory in {presynaptic neuron × postsynaptic neuron} format. The TM natively has a synapse matrix of {presynaptic neuron × postsynaptic dendritic segment}: Wd(::DistalSynapses). It is joined with the neuron-segment adjacency matrix NS(::DistalSynapses); since both have dimensions Nₙ × Nₛ, the neuron-to-neuron connection matrix is the product Wd * NS' (see the sketch below).
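
A dimensional sketch of the join, assuming both matrices in the Nₙ × Nₛ orientation described in DistalSynapses:

Nn, Ns = 6, 4
Wd_seg = falses(Nn, Ns); Wd_seg[2,1] = true   # connected synapse: presynaptic neuron 2 → segment 1
NS     = falses(Nn, Ns); NS[5,1]     = true   # segment 1 belongs to neuron 5
Wd_nn  = Wd_seg * NS'                         # Nₙ × Nₙ; Wd_nn[2,5] == 1: neuron 2 → neuron 5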

source
HierarchicalTemporalMemory.bestmatchMethod

bestmatch(s::DistalSynapses,col, povp_Mₛ) finds the best-matching segment in a column, as long as its overlap with the input (its activation) is > θ_stimulus_learn. Otherwise, returns nothing.

source
HierarchicalTemporalMemory.decaySMethod

decayS(s::DistalSynapses,pMₛ,α) are the dendritic segments that should decay according to LTD (long term depression). It's the segments that at the previous moment had enough potential synapses with active neurons to "match" the input (pMₛ), but didn't contribute to their neuron firing at this moment (they didn't activate strongly enough to depolarize it).

source
HierarchicalTemporalMemory.growseg!Method

growseg!(s::DistalSynapses,WS_col::Vector{Option{Int}},burstingcolidx) grows 1 new segment for each column whose winning segment is nothing and replaces the nothing with it. It resizes all the DistalSynapses matrices to append the new dendritic segments. For each bursting minicolumn without a winning segment, the segment grows on the neuron with the fewest segments.

source
HierarchicalTemporalMemory.growsynapses!Method

growsynapses!(s::DistalSynapses, pWN::CellActivity,WS, povp_Mₛ) adds synapses between winning dendrites (WS) and a random sample of previously winning neurons (pWN).

For each dendrite, target neurons are selected among pWN, each picked with probability $\frac{\mathit{synapseSampleSize} - \mathit{prev\_Overlap}}{\mathrm{length}(\mathit{pWN})}$ (Bernoulli sampling).
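
For example, with the default synapseSampleSize = 25 (see TMParams), a dendrite that overlapped the previous input with 14 synapses, and 40 previous winning neurons (illustrative numbers):

synapseSampleSize, prev_overlap, n_winners = 25, 14, 40   # n_winners = length(pWN)
p_grow = (synapseSampleSize - prev_overlap) / n_winners   # ≈ 0.275 chance to pick each winner
expected_new_synapses = p_grow * n_winners                # ≈ 11 new synapses on average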

This involves inserting new elements in random places of a SparseMatrixCSC and is the algorithm's performance bottleneck.

  • TODO: try replacing CSC with a Dict implementation of sparse matrices.
source
HierarchicalTemporalMemory.hcat!!Method
hcat!!(s::SparseMatrixCSC, I,V)

WARNING: s is invalid after this operation! Its symbol should be reassigned to the output of hcat!! (see the sketch below). Appends new columns to s, each with a single value V[c] at row I[c]. Because SparseMatrixCSC is immutable, it returns a new, valid SparseMatrixCSC that points to the original's data structures. For large matrices, this is much faster than hcat.
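
A usage sketch of the reassignment pattern (the element and index types here are only assumptions):

using SparseArrays
s = spzeros(UInt8, 5, 3)
s = hcat!!(s, [1, 4], UInt8[200, 150])   # append 2 columns: 200 at row 1, 150 at row 4; s reassigned
size(s)                                  # (5, 5)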

source
HierarchicalTemporalMemory.networkPlotMethod
networkPlot(network, plotFunction, extraArgs...)

Uses plotFunction to plot a network graph. Assumes that plotFunction == GraphMakie.graphplot, but might work with others. extraArgs are passed as-is to the plotting function.

source
HierarchicalTemporalMemory.predict!Method
predict!(classifier::SDRClassifier, Π, target::Int; enable_learning=true)

Predicts the probability that the output represents each bucket of an arithmetic encoder. The returned probabilities lie in [0, 1].

source
HierarchicalTemporalMemory.propagation_delayMethod
propagation_delay(n::Network)

The minimum number of repeated stimulations with an input a before the network's output n(a) becomes related to a. The network always produces some activity, but until that point it is unrelated to the input a. The signal traverses 1 Region of the Network with each stimulation; therefore, the number of regions between the network's input and output is the minimum time before the output relates to the input.

The network might not be stable at this point and continued stimulation with a might allow longer paths to the output to contribute.

source
HierarchicalTemporalMemory.reverse_simpleArithmeticMethod
reverse_simpleArithmetic(bucketProbDist, algorithm,params)

Reverses the simple arithmetic encoder, given the same parameters. Takes a probability distribution across all buckets and collapses it to a single arithmetic value representing the most likely estimate in the timeseries' domain (like a defuzzifier).

Arguments

  • bucketProbDist [nbucket]: discrete probability of each bucket [0-1]
  • algorithm: one of {'mean','mode','highmean'}; 'highmean' is the mean of the highest-estimated values
source
HierarchicalTemporalMemory.sp_activateMethod

sp_activate(sp::SpatialPooler, z) calculates the SP's output activation for given input activation z.

Algorithm

  1. Overlap o(yᵢ)
    • Propagate the input activations through the proximal synapses (matrix multiply)
    • Apply boosting factors b: control mechanism that spreads out the activation pattern across understimulated neurons (homeostatic excitability control)
  2. Inhibition Z between yᵢ (local/global): competition where only the top-k yᵢ with the highest o(yᵢ) win; ensures sparsity. The competition happens within an area around each neuron.
    • k: number of winners depends on desired sparsity (s, see SPParams) and area size
    • θ_inhibit: inhibition threshold per neighborhood $:= o(k-th y)$
    • Z(o(yᵢ)): convolution of o with θ_inhibit
  3. Activate winning yᵢ > activation threshold (θ_stimulus_activate)
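
A toy sketch of the global-inhibition case of these steps, without topology or boosting (a hypothetical helper, not the package's sp_activate):

using Random

# W: connected proximal synapse matrix (Nin × Ncol), z: input SDR, s: target sparsity, θ: stimulus threshold
function toy_sp_activate(W, z, s, θ)
  o = W' * z                               # 1. overlap per minicolumn (boosting omitted)
  k = max(1, round(Int, s * size(W, 2)))   # 2. number of winners for the desired sparsity
  θ_inhibit = partialsort(o, k; rev=true)  #    overlap of the k-th strongest minicolumn
  (o .>= θ_inhibit) .& (o .>= θ)           # 3. winners above the stimulus threshold activate
end

toy_sp_activate(bitrand(600, 2048), bitrand(600), 0.02, 1)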
Info

The local sparsity s tends to the sparsity of the entire layer as it grows larger, but for small values or small inhibition radius it diverges, because of the limited & integral number of neurons winning in each neighborhood. This could be addressed by tie breaking, but it doesn't seem to have much practical importance.

source
HierarchicalTemporalMemory.sparse_foreachMethod

sparse_foreach(f, s::SparseMatrixCSC,columnIdx) iterates over the columns of the sparse matrix s specified by columnIdx and applies f(columnElt,columnRowIdx) to each. Column elements are a @view into the nonzero elements at the given column of s.

See also: sparse_map

source
HierarchicalTemporalMemory.sparse_mapMethod

sparse_map(f, s::SparseMatrixCSC,columnIdx) maps each column of the sparse matrix s specified by columnIdx to f(columnElt,columnRowIdx). Column elements are a @view into the nonzero elements at the given column of s.

See also: sparse_foreach

source
HierarchicalTemporalMemory.step!Function
step!(r::Region, proximal, distal=falses(0); learn= true)

Stimulates the region with the given inputs, adapting it following the learning rules. If distal input isn't given explicitly, recurrent connections are still used (if present). If learn == false, the synapses are not adapted.

See also Region

source
HierarchicalTemporalMemory.step!Function
step!(sp::SpatialPooler, z::CellActivity, learn= true)

Evolves the Spatial Pooler to the next timestep by evolving each of its constituents (synapses, boosting, inhibition radius) and returns the output activation. If learn == false, the synapses are not adapted.

See also: sp_activate

source
HierarchicalTemporalMemory.step!Method

step!(s::ProximalSynapses, z,a, params) adapts the proximal synapses' permanences with a hebbian learning rule on input z and activation a. The adaptation has a causal and an anticausal component:

  • If the postsynaptic neuron fires and the presynaptic fired too, the synapse is strengthened
  • If the postsynaptic neuron fires, but the presynaptic didn't, the synapse is weakened

See also: ProximalSynapses

source
HierarchicalTemporalMemory.step_boost!Method

step_boost!(sp::SpatialPooler,a) evolves the boosting factors b (see sp_activate). They depend on:

  • åₜ: moving average in time activation of each minicolumn
  • åₙ: moving average in neighborhood activation of each minicolumn
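
For instance, a standard moving-average filter with period Tboost (an illustrative form; the exact update rule is in the source):

update_å(å, a, Tboost) = ((Tboost - 1) .* å .+ a) ./ Tboost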
source
HierarchicalTemporalMemory.tm_activateMethod

tm_activate(tm::TemporalMemory, c, Π) calculates

  1. which minicolumns burst,
  2. which neurons in the layer are activated
  3. which become predictive

for minicolumn activation c (size Nc) given by the SpatialPooler and the previously predictive neurons Π (size Nₙ).

Returns

  1. a: neuron activation (Nₙ)
  2. B: bursting minicolumns (Nc)
  3. WN: "winning" neurons (Nₙ)

See also: tm_predict, DistalSynapses

source
HierarchicalTemporalMemory.tm_predictMethod

tm_predict(tm::TemporalMemory, α) calculates which neurons will be predictive at the next step given the currently active neurons + distal input α. size(α) must match the presynaptic length of tm.distalSynapses.

Returns

  1. Π: predictive neurons (Nₙ)
  2. Πₛ: predictive dendritic segments (Nₛ) (caching)
  3. Mₛ: matching dendritic segments (Nₛ) (learning)
  4. ovp_Mₛ: input overlap of the matching dendritic segments (Nₛ) (learning)
source
HierarchicalTemporalMemory.vcat!!Method
vcat!!(s::SparseMatrixCSC, J,V)

WARNING: s is invalid after this operation! Its symbol should be reassigned to the output of vcat!!. Appends new rows to s, each with a single value V[r] at column J[r]. Because SparseMatrixCSC is immutable, it returns a new, valid SparseMatrixCSC that points to the original's data structures. For large matrices, this is much faster than vcat.

source
HierarchicalTemporalMemory.@percolumnMacro

@percolumn(f,a,b,k)

Macro to fold a large vector a into minicolumns of k elements each and apply the binary operator f elementwise against b

  • a: vector of size Nc*k, column-major (important because it will fold every k elements)
  • b: vector of size Nc
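
Illustratively, and assuming this reading of the macro, the effect is comparable to reshaping a into k rows and broadcasting f against b as a row vector:

Nc, k = 3, 2
a = collect(1:Nc*k)                  # column-major: every k consecutive elements form one minicolumn
b = [10, 20, 30]                     # one element per minicolumn
broadcast(+, reshape(a, k, Nc), b')  # roughly what @percolumn(+, a, b, k) computes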
source