Denoising images from the MNIST data set

The idea here is to test a denoising strategy based on autoencoders and to experiment with its various elements.

We have two models: one small, simple model and one bigger, deeper, smarter model.

The noise of choice was an "X" watermark of size 5 x 5. Using a function that inserts this X at a random position in the image, we polluted the train and test images with these watermarks and then removed them with the denoising autoencoder.
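As a rough illustration, here is one way such a watermarking function could be written with NumPy. The function names and the stamping rule (taking the pixel-wise maximum so the X stays visible over both dark and bright regions) are assumptions for this sketch, not necessarily what the repository does.

```python
import numpy as np

def make_x_mark(size=5):
    """Build a size x size 'X' pattern (both diagonals set to 1)."""
    mark = np.zeros((size, size), dtype=np.float32)
    for i in range(size):
        mark[i, i] = 1.0
        mark[i, size - 1 - i] = 1.0
    return mark

def add_watermarks(images, n_marks=1, size=5, rng=None):
    """Stamp n_marks 'X' watermarks at random positions in each image.

    images: array of shape (N, 28, 28) with values in [0, 1].
    Returns a noisy copy; the originals are left untouched.
    """
    rng = np.random.default_rng() if rng is None else rng
    mark = make_x_mark(size)
    noisy = images.copy()
    h, w = images.shape[1], images.shape[2]
    for img in noisy:
        for _ in range(n_marks):
            r = rng.integers(0, h - size + 1)
            c = rng.integers(0, w - size + 1)
            img[r:r + size, c:c + size] = np.maximum(img[r:r + size, c:c + size], mark)
    return noisy
```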

The small model

small_model

The big model

big_model
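For reference, here is a minimal sketch of what a small convolutional denoising autoencoder along these lines could look like, assuming TensorFlow/Keras. The layer counts, filter sizes and the L2 regularization factor are illustrative guesses, not the exact architectures pictured above; the training call shows the denoising setup, where the watermarked images are the inputs and the clean images are the targets.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_small_denoiser():
    """Small convolutional denoising autoencoder (illustrative sizes)."""
    inp = layers.Input(shape=(28, 28, 1))
    # Encoder: 28x28 -> 14x14 -> 7x7
    x = layers.Conv2D(16, 3, activation="relu", padding="same",
                      kernel_regularizer=regularizers.l2(1e-4))(inp)
    x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)
    # Decoder: 7x7 -> 14x14 -> 28x28
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    return models.Model(inp, out)

model = build_small_denoiser()
model.compile(optimizer="adam", loss="binary_crossentropy")
# Train on (watermarked input, clean target) pairs, e.g.:
# model.fit(x_train_noisy, x_train, epochs=15, batch_size=128,
#           validation_data=(x_test_noisy, x_test))
```

The "big" model would follow the same encoder/decoder pattern, just with more layers and filters per layer.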

Some results

In each result image, the first row contains the original images, the middle row the images with watermarks, and the bottom row the images after going through the model.
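A plotting helper along these lines can produce that three-row layout; the array names (x_clean, x_noisy, x_denoised) are placeholders for this sketch.

```python
import matplotlib.pyplot as plt

def show_rows(x_clean, x_noisy, x_denoised, n=8):
    """Plot originals (top), watermarked (middle) and denoised (bottom)."""
    fig, axes = plt.subplots(3, n, figsize=(2 * n, 6))
    for row, batch in enumerate([x_clean, x_noisy, x_denoised]):
        for col in range(n):
            axes[row, col].imshow(batch[col].squeeze(), cmap="gray")
            axes[row, col].axis("off")
    plt.tight_layout()
    plt.show()

# e.g. show_rows(x_test, x_test_noisy, model.predict(x_test_noisy))
```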

Small model

15 Epochs, 1 Watermark

small_15_1mark

50 Epochs, 1 Watermark

small_50e_1wm

50 Epochs, 5 Watermarks

small_50e_5wm

100 Epochs, 10 Watermarks

I don't fully understand what happened here. I tried running for more epochs and got this result. I will come back and investigate further at some point.

small_100e_10wm

Big Model

This model took far longer to run, about 5 minutes per epoch on my system, and it uses far more memory too. Of course it was overkill for this dataset, but we can learn a lot from an exaggeration.

The big model was 3 orders of magnitude bigger than the small model.

15 Epochs, 1 Watermark

big_15e_1wm

15 Epochs, 10 Watermarks

big_15e_10m

2 Epochs, 1 Watermark

Here we can see how a net can behave badly in unusual situations, such as training for only 2 epochs.

big_2e_1wm

5 Epochs, 10 Watermarks

big_5e_10m

5 Epochs, 20 Watermarks

big_5e_20wm

10 Epochs, 30 Watermarks

This one surprised me: the model was able to remove the watermarks very well, even though the inputs could be confusing for a human at first glance.

big_10e_30wm

Conclusion

Big, smart models are indeed better, way better, who would have thought? But at what price? That is a question only your use case can answer. One lesson we can take from this is that models are meant to be run for many epochs; building a giant model and running it for only a few epochs makes little sense relative to its cost.

The design of the smaller model may not be a very good one. Building a better small model and an intermediate one is on the to-do list, so as to reach a better understanding of which one to choose when the time comes. But for now, that's what I've got.

About

Denoising autoencoder, using CNNs as layers and with regularization
