# Exercise Denoise Cosine

The goal of this exercise is to implement a multilayer dense neural network from scratch in C++. Specifically, you will implement gradient descent and use it to learn a cosine function.

- First, take a look at and understand the array datatype defined on line 8 of `src/utils.cpp`.
- Next, implement the linear algebra operations in `src/utils.cpp`: matrix multiplication, addition, subtraction, Hadamard (element-wise) product, sum of matrix elements, transpose, and element-wise matrix power. A sketch of one such operation is shown below.
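As an illustration, here is a minimal matrix-multiplication sketch. It assumes a hypothetical `array` struct with `rows`, `cols`, and a flat row-major `data` buffer; the actual datatype on line 8 of `src/utils.cpp` may well differ.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for the array datatype in src/utils.cpp.
struct array {
    int rows, cols;
    std::vector<float> data;  // row-major storage
    float& at(int r, int c) { return data[r * cols + c]; }
    float at(int r, int c) const { return data[r * cols + c]; }
};

// Matrix multiplication: C = A * B.
array matmul(const array& A, const array& B) {
    assert(A.cols == B.rows);
    array C{A.rows, B.cols, std::vector<float>(A.rows * B.cols, 0.0f)};
    for (int i = 0; i < A.rows; ++i)
        for (int k = 0; k < A.cols; ++k)      // i-k-j order is cache-friendly
            for (int j = 0; j < B.cols; ++j)
                C.at(i, j) += A.at(i, k) * B.at(k, j);
    return C;
}
```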
- You can check your implementations in `src/utils.cpp` by running the tests with the following commands:
```bash
cd <exercise_folder>/tests
g++ -o test_utils_executable test_utils.cpp
./test_utils_executable
```
- If the executable produces no output, your implementation is correct.
- Navigate to `src/mlutils.cpp` and, as a first step, implement the sigmoid activation function:
$$\sigma(x) = \frac{1}{1 + e^{-x}}$$
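A minimal element-wise sketch, reusing the hypothetical `array` struct from above:

```cpp
#include <cmath>

// Element-wise sigmoid: sigma(x) = 1 / (1 + exp(-x)).
array sigmoid(const array& x) {
    array out = x;
    for (float& v : out.data)
        v = 1.0f / (1.0f + std::exp(-v));
    return out;
}
```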
- Next, given the ground truth and the predictions, compute the Mean Squared Error (MSE) in the cost function as follows:
$$\mathrm{MSE} = \frac{1}{2}(y - h)^2$$
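A sketch of the cost, where `y` holds the ground truth and `h` the predictions; averaging over the number of elements is an assumption, since the formula above is written for a single sample:

```cpp
#include <cassert>
#include <cstddef>

// Mean squared error: average of 0.5 * (y - h)^2 over all elements.
float mse(const array& y, const array& h) {
    assert(y.data.size() == h.data.size());
    float total = 0.0f;
    for (std::size_t i = 0; i < y.data.size(); ++i) {
        const float diff = y.data[i] - h.data[i];
        total += 0.5f * diff * diff;
    }
    return total / static_cast<float>(y.data.size());
}
```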
- Similarly, you can test your implementation of `src/mlutils.cpp` by compiling and running `test_mlutils.cpp`.
- Navigate to `src/denoise_cosine.cpp`. Using the custom datatype above, declare the $W_1$, $W_2$, and $bias$ variables and initialise the weights using the corresponding initialisation function in `src/utils.cpp`. Declare the gradient variables in the same way.
- Implement the forward pass in the `network` function in `src/denoise_cosine.cpp`, and call it during training in the `main` function, followed by the loss function implemented above. A possible shape for the forward pass is sketched below.
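The exact architecture is fixed by the exercise skeleton; one plausible two-layer forward pass, assuming $h = W_2\,\sigma(W_1 x + bias)$ and an element-wise `add` helper in `src/utils.cpp` (the helper names are assumptions):

```cpp
// Hypothetical two-layer forward pass: h = W2 * sigmoid(W1 * x + bias).
array network(const array& x, const array& W1,
              const array& W2, const array& bias) {
    array z1 = add(matmul(W1, x), bias);  // hidden pre-activation
    array a1 = sigmoid(z1);               // hidden activation
    return matmul(W2, a1);                // linear output layer
}
```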
- Next, derive the gradients for each variable and implement them in the `compute_gradients` function in `src/mlutils.cpp`. Note: `compute_gradients` takes the gradient variables declared above as arguments by reference, so no return type is necessary. A reference derivation is sketched below.
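For reference, a derivation under the assumed architecture $h = W_2 a_1$ with $a_1 = \sigma(z_1)$, $z_1 = W_1 x + bias$, and loss $L = \frac{1}{2}(y - h)^2$; if the skeleton's network differs, the chain rule applies in the same way:

$$
\begin{aligned}
\frac{\partial L}{\partial h} &= -(y - h), \\
\frac{\partial L}{\partial W_2} &= \frac{\partial L}{\partial h}\, a_1^{\top}, \\
\frac{\partial L}{\partial z_1} &= \Big(W_2^{\top}\, \frac{\partial L}{\partial h}\Big) \odot \sigma(z_1) \odot \big(1 - \sigma(z_1)\big), \\
\frac{\partial L}{\partial W_1} &= \frac{\partial L}{\partial z_1}\, x^{\top}, \qquad
\frac{\partial L}{\partial bias} = \frac{\partial L}{\partial z_1}.
\end{aligned}
$$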
- As the final training step, implement the gradient descent update using the following formula:
$$W_{\text{new}} = W_{\text{old}} - lr \cdot \frac{\partial L}{\partial W_{\text{old}}}$$
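A minimal in-place update sketch, assuming a learning rate `lr` and a gradient array `dW` produced by `compute_gradients` (both names are assumptions):

```cpp
#include <cstddef>

// Gradient descent step: W <- W - lr * dW, applied element-wise.
void gradient_descent_step(array& W, const array& dW, float lr) {
    for (std::size_t i = 0; i < W.data.size(); ++i)
        W.data[i] -= lr * dW.data[i];
}
```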
- Compute the network predictions and assign them to the `y_hat` variable. You can then build and run the C++ program with the following commands:
```bash
cd <exercise_folder>/src
g++ -o denoise_executable denoise_cosine.cpp
./denoise_executable
```
- Finally, to check the cosine fit, run the following commands:
```bash
cd <exercise_folder>/src
python plot.py
```