module IronpenGPU
greet() = print("Hello World!")

# export

""" Order by dependencies of each file. The 1st included file must not depend on any other
|
||||
files and each file can only depend on the file included before it.
|
||||
"""

include("types.jl")
using .types # bring the model into this module's namespace (this module is the parent module)

include("snnUtils.jl")
using .snn_utils

include("interface.jl")
using .interface
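The dependency-ordered include pattern above can be sketched as below. The file bodies are illustrative guesses, not the actual contents of `types.jl` or `snnUtils.jl`; in the real module the parent is `IronpenGPU`, while in this self-contained sketch the parent is the top level:

```julia
# --- types.jl (illustrative sketch) -- depends on nothing, so it is included first ---
module types
export Neuron
struct Neuron
    tau_m::Float64   # membrane time constant
end
end

# --- snnUtils.jl (illustrative sketch) -- depends only on types, so it comes second ---
module snn_utils
using ..types        # reach the previously included sibling module via the parent
export decay_factor
# hypothetical helper: membrane potential decay factor α = exp(-dt / tau_m)
decay_factor(n::Neuron, dt) = exp(-dt / n.tau_m)
end

# After `using .types` / `using .snn_utils`, the parent sees both modules:
n = types.Neuron(20.0)
α = snn_utils.decay_factor(n, 1.0)   # ≈ 0.951
```

This is why the include order matters: `snn_utils` resolves `Neuron` through its parent, so `types` must already have been included when `snnUtils.jl` is loaded.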

#------------------------------------------------------------------------------------------------100

"""
|
||||
Todo:
|
||||
[*1] knowledgeFn in GPU format
|
||||
[] use partial error update for computeNeuron
|
||||
[] use integrate_neuron_params synapticConnectionPercent = 20%
|
||||
[2] implement dormant connection and pruning machanism. the longer the training the longer
|
||||
0 weight stay 0.
|
||||
[] using RL to control learning signal
|
||||
[] consider using Dates.now() instead of timestamp because time_stamp may overflow
|
||||
[] Liquid time constant. training should include adjusting α, neuron membrane potential decay factor
|
||||
which defined by neuron.tau_m formula in type.jl
|
||||
|
||||
Change from version:
|
||||
-
|
||||
|
||||
All features
|
||||
|
||||
"""
end # module IronpenGPU