Commit 2c4aee4: "Updated documentation in Readme" (parent 83e74ec)

1 file changed: README.md (+8 −8 lines)
````diff
@@ -24,8 +24,8 @@ outputs = 1
 network = NeuralNetwork(inputs, outputs, cost="mse")
 
 # Add 2 hidden layers with 16 neurons each and activation function 'tanh'
-network.addLayer(16, activation_function="tanh")
-network.addLayer(16, activation_function="tanh")
+network.add_layer(16, activation_function="tanh")
+network.add_layer(16, activation_function="tanh")
 
 # Finish the neural network by adding the output layer with sigmoid activation function.
 network.compile(activation_function="sigmoid")
@@ -44,9 +44,9 @@ input_file = "inputs.csv"
 target_file = "targets.csv"
 
 # Create a dataset object with the same inputs and outputs defined for the network.
-datasetCreator = Dataset(inputs, outputs)
-datasetCreator.makeDataset(input_file, target_file)
-data, size = datasetCreator.getRawData()
+dataset_handler = Dataset(inputs, outputs)
+dataset_handler.make_dataset(input_file, target_file)
+data, size = dataset_handler.get_raw_data()
 ```
 
 If you want to manually make a dataset, follow these rules:
@@ -80,15 +80,15 @@ For eg, a typical XOR data set looks something like :
 ### Training The network
 The library provides a *Train* function which accepts the dataset, dataset size, and two optional parameters epochs, and logging.
 ```python3
-def Train(dataset, size, epochs=5000, logging=True) :
+def Train(self, dataset: T_Dataset, size, epochs=100, logging=False, epoch_logging=True, prediction_evaulator=None):
 ....
 ....
 ```
 For Eg: If you want to train your network for 1000 epochs.
 ```python3
 >>> network.Train(data, size, epochs=1000)
 ```
-Notice that I didn't change the value of log_outputs as I want the output to printed for each epoch.
+Notice that I didn't change the value of `logging` as I want the output to be printed for each epoch.
 
 
 ### Debugging
@@ -109,7 +109,7 @@ To take a look at all the layers' info
 
 Sometimes, learning rate might have to be altered for better convergence.
 ```python3
->>> network.setLearningRate(0.1)
+>>> network.set_learning_rate(0.1)
 ```
 
 ### Exporting Model
````
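Taken together, the renames in this diff move the documented API from camelCase (`addLayer`, `setLearningRate`) to snake_case (`add_layer`, `set_learning_rate`). A minimal sketch of the updated call pattern, using hypothetical stand-in classes; the real `NeuralNetwork` implementation ships with the library and actually builds and trains a network:

```python
# Hypothetical stand-in, only to illustrate the snake_case method names
# this commit documents; it records structure instead of training.
class NeuralNetwork:
    def __init__(self, inputs, outputs, cost="mse"):
        self.layers = []
        self.learning_rate = 0.01

    def add_layer(self, neurons, activation_function="tanh"):
        # Record a hidden layer; the real method would allocate weights.
        self.layers.append((neurons, activation_function))

    def compile(self, activation_function="sigmoid"):
        # Close the network with the output layer.
        self.layers.append((1, activation_function))

    def set_learning_rate(self, learning_rate):
        self.learning_rate = learning_rate


net = NeuralNetwork(inputs=2, outputs=1, cost="mse")
net.add_layer(16, activation_function="tanh")
net.add_layer(16, activation_function="tanh")
net.compile(activation_function="sigmoid")
net.set_learning_rate(0.1)
```

With the stand-in above, `net.layers` holds two hidden layers plus the output layer, and the old camelCase names no longer exist on the class, matching the README text after this commit.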
