module IronpenGPU # this is a parent module
# export
"""
Order by dependencies: the first included file must not depend on any other
file, and each file may depend only on files included before it.
"""

include("type.jl")
using .type # bring type into parent module namespace
include("snnUtil.jl")
using .snnUtil
include("forward.jl")
using .forward
include("learn.jl")
using .learn
include("interface.jl")
using .interface
#------------------------------------------------------------------------------------------------100
"""
version 0.0.7

Todo:
[] add voltage regulator
[] synaptic liquidity range 0 to 100,000 -> 1.0 to 0.99
[] add weight liquidity
[-] add temporal summation in addition to the already used spatial summation.
    CANCELLED: spatial summation every second until the membrane potential reaches
    a threshold is in itself a temporal summation.
[4] implement a dormant-connection and pruning mechanism. The longer the training,
    the longer a 0 weight stays 0.
[] use RL to control the learning signal
[] consider using Dates.now() instead of timestamp, because time_stamp may overflow
[] liquid time constant: training should include adjusting α, the neuron membrane
    potential decay factor, which is defined by the neuron.tau_m formula in type.jl

Change from version 0.0.6:
-

All features:
- excitatory/inhibitory matrix
- neuroplasticity
"""
end # module IronpenGPU