
Continuously improving a neural network over time using small batches. #135

@BjarkeCK

Description


Hey, first off, thanks for a fantastic library!

The library is really simple to use when you have a large dataset and want to train a network in one go.

But I'm building a DQN and I want to continuously improve a neural network from small batches of training data, with as little overhead as possible. Is that something that's easily possible in SharpLearning?

Right now, the only way I can see to achieve it is by doing something like this:

var net = new NeuralNet();
// ...

while (true)
{
    var learner = new NeuralNetLearner(net, new CopyTargetEncoder(), new SquareLoss());
    // ...
    net = learner.Learn(observations, targets);
}

However, there's a lot of overhead and data copying going on there. Is there a better way to go about it?

Thanks :)

Edit 1: It seems my example doesn't work either, since the weights are re-randomized when learning begins.

Edit 2: My second attempt throws a NullReferenceException on net.Forward(input, output). (Although I imagine this is not a very good way to go about it either, and it's probably wrong on many levels 😊)

var delta = Matrix<float>.Build.Dense(1, 1);
var input = Matrix<float>.Build.Dense(inputCount, 1);
var output = Matrix<float>.Build.Dense(1, 1);

while (true)
{
    PopulateInput(input);
    net.Forward(input, output);
    var expected = GetExpected();
    delta[0, 0] = (float)(expected - output[0, 0]);
    net.Backward(delta);
}
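To make what I'm after concrete, here is a minimal, self-contained sketch of the in-place online-SGD pattern, independent of SharpLearning (nothing here is the library's API): each incoming sample updates the model directly via a forward pass, a loss delta, and a gradient step, with no learner object rebuilt and no data copied.

```csharp
using System;

class OnlineSgdSketch
{
    static void Main()
    {
        // Hypothetical one-parameter model y = w * x, trained online:
        // each sample updates the weight in place.
        double w = 0.0;
        double learningRate = 0.1;

        // Stream of (input, target) pairs standing in for DQN transitions.
        var samples = new (double x, double y)[]
        {
            (1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0),
        };

        for (int epoch = 0; epoch < 100; epoch++)
        {
            foreach (var (x, y) in samples)
            {
                double output = w * x;         // forward pass
                double delta = output - y;     // d(squared loss)/d(output)
                w -= learningRate * delta * x; // backward + in-place update
            }
        }

        Console.WriteLine($"w = {w:F4}"); // converges toward 2
    }
}
```

The key point is that the parameters persist across iterations; the question is whether SharpLearning's NeuralNet can be updated this way without re-creating the learner.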
