|
693 | 693 | "The main function you will need is `autograd.grad()`, which takes a scalar-valued Python function as argument, and returns another function that evaluates to its derivative. \n",
|
694 | 694 | "It's what we use in the optimization loop to perform gradient descent.\n",
|
695 | 695 | "\n",
|
696 |     | - "In addition, `autograd.numpy` is a wrapper to the NumPy library. This allows you to call your favorite NumPy methods with `autograd` keeping track of every operation so it can give you the derivative (via the chain rule).\n",
|
697 |     | - "We will import it using the alias (`as np`), consistent with the tutorials and documentation that you will find online.\n",
|
698 |     | - "Up to now in the _Engineering Computations_ series of modules, we had refrained from using the aliased form of the import statements, just to have more explicit and readable code. "
|
    | 696 | + "In addition, `autograd.numpy` is a wrapper to the NumPy library. This allows you to call your favorite NumPy methods with `autograd` keeping track of every operation so it can give you the derivative (via the chain rule)."
699 | 697 | ]
|
700 | 698 | },
|
701 | 699 | {
|
|
705 | 703 | "outputs": [],
|
706 | 704 | "source": [
|
707 | 705 | "# import the autograd-wrapped version of numpy\n",
|
708 |     | - "import autograd.numpy as np"
|
    | 706 | + "from autograd import numpy"
709 | 707 | ]
|
710 | 708 | },
|
711 | 709 | {
|
|
732 | 730 | "metadata": {},
|
733 | 731 | "outputs": [],
|
734 | 732 | "source": [
|
735 |     | - "# note: the namespace np is the autograd wrapper to NumPy\n",
|
    | 733 | + "# note: the namespace numpy is the autograd wrapper to NumPy\n",
736 | 734 | "\n",
|
737 | 735 | "def logistic(z):\n",
|
738 | 736 | " '''The logistic function'''\n",
|
739 |     | - " return 1 / (1 + np.exp(-z))\n",
|
    | 737 | + " return 1 / (1 + numpy.exp(-z))\n",
740 | 738 | " \n",
|
741 | 739 | "def logistic_model(params, x):\n",
|
742 | 740 | " '''A prediction model based on the logistic function composed with wx+b\n",
|
|
756 | 754 | " model: the Python function for the logistic model\n",
|
757 | 755 | " x, y: arrays of input data to the model'''\n",
|
758 | 756 | " y_pred = model(params, x)\n",
|
759 |     | - " return -np.mean(y * np.log(y_pred) + (1-y) * np.log(1 - y_pred))"
|
    | 757 | + " return -numpy.mean(y * numpy.log(y_pred) + (1-y) * numpy.log(1 - y_pred))"
760 | 758 | ]
|
761 | 759 | },
|
762 | 760 | {
|
|
816 | 814 | ],
|
817 | 815 | "source": [
|
818 | 816 | "numpy.random.seed(0)\n",
|
819 |     | - "params = np.random.rand(2)\n",
|
    | 817 | + "params = numpy.random.rand(2)\n",
820 | 818 | "print(params)"
|
821 | 819 | ]
|
822 | 820 | },
|
|
891 | 889 | "source": [
|
892 | 890 | "max_iter = 5000\n",
|
893 | 891 | "i = 0\n",
|
894 |     | - "descent = np.ones(len(params))\n",
|
    | 892 | + "descent = numpy.ones(len(params))\n",
895 | 893 | "\n",
|
896 |     | - "while np.linalg.norm(descent) > 0.001 and i < max_iter:\n", |
|
    | 894 | + "while numpy.linalg.norm(descent) > 0.001 and i < max_iter:\n",
897 | 895 | "\n",
|
898 | 896 | " descent = gradient(params, logistic_model, x_data, y_data)\n",
|
899 | 897 | " params = params - descent * 0.01\n",
|
|