module IronpenGPU # this is a parent module
# export
"""
Files are included in dependency order: the first included file must not depend on any
other file, and each subsequent file may depend only on files included before it.
"""
include("type.jl")
using .type # bring the type submodule into the parent module's namespace

include("snnUtil.jl")
using .snnUtil

include("forward.jl")
using .forward

include("learn.jl")
using .learn

include("interface.jl")
using .interface
#------------------------------------------------------------------------------------------------100
""" version 0.0.3

Todo:
[2] Implement a dormant-connection and pruning mechanism: the longer the training runs,
    the longer zero weights stay at zero.
[] Use RL to control the learning signal.
[] Consider using Dates.now() instead of timestamp, because time_stamp may overflow.
[] Liquid time constant: training should include adjusting α, the neuron membrane
   potential decay factor, which is defined by the neuron.tau_m formula in type.jl.

Changes from version 0.0.2:
- knowledgeFn in GPU format
- use partial error update for computeNeuron
- frequency regulator

All features

"""
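
# A minimal sketch for the Dates.now() todo above (assumptions: time_stamp is an
# integer tick counter that can overflow, and millisecond resolution is sufficient).
# Dates.now() returns a DateTime backed by Int64 milliseconds, so it will not wrap
# in practice:
#
#     using Dates
#     t0 = Dates.now()                          # current wall-clock DateTime
#     elapsed = Dates.value(Dates.now() - t0)   # elapsed milliseconds as Int64
#
# Illustration only; the names t0 and elapsed are not part of this module.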
end # module IronpenGPU