diff --git a/notebooks/5_local_search/1.png b/notebooks/5_local_search/1.png
new file mode 100644
index 00000000..4d0f5bd5
Binary files /dev/null and b/notebooks/5_local_search/1.png differ
diff --git a/notebooks/5_local_search/2.png b/notebooks/5_local_search/2.png
new file mode 100644
index 00000000..5e74bd6c
Binary files /dev/null and b/notebooks/5_local_search/2.png differ
diff --git a/notebooks/5_local_search/3.png b/notebooks/5_local_search/3.png
new file mode 100644
index 00000000..a16f4e3e
Binary files /dev/null and b/notebooks/5_local_search/3.png differ
diff --git a/notebooks/5_local_search/8-queene-problem.png b/notebooks/5_local_search/8-queene-problem.png
new file mode 100644
index 00000000..f848365d
Binary files /dev/null and b/notebooks/5_local_search/8-queene-problem.png differ
diff --git a/notebooks/5_local_search/8Queens_Genetics.jpg b/notebooks/5_local_search/8Queens_Genetics.jpg
new file mode 100644
index 00000000..dff5748b
Binary files /dev/null and b/notebooks/5_local_search/8Queens_Genetics.jpg differ
diff --git a/notebooks/5_local_search/Genetic_psudocode.jpg b/notebooks/5_local_search/Genetic_psudocode.jpg
new file mode 100644
index 00000000..ffc02f1e
Binary files /dev/null and b/notebooks/5_local_search/Genetic_psudocode.jpg differ
diff --git a/notebooks/5_local_search/LNS2.png b/notebooks/5_local_search/LNS2.png
new file mode 100644
index 00000000..d0c20bdc
Binary files /dev/null and b/notebooks/5_local_search/LNS2.png differ
diff --git a/notebooks/5_local_search/LNS_ex.jpg b/notebooks/5_local_search/LNS_ex.jpg
new file mode 100644
index 00000000..84b63a34
Binary files /dev/null and b/notebooks/5_local_search/LNS_ex.jpg differ
diff --git a/notebooks/5_local_search/README.md b/notebooks/5_local_search/README.md
deleted file mode 100644
index c40215c9..00000000
--- a/notebooks/5_local_search/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Local Search
-
-- Mahsa Amani
-- Mobina Poornemant
-- Arman Zarei
diff --git a/notebooks/5_local_search/Simulated_Annealing_psudocode.jpg b/notebooks/5_local_search/Simulated_Annealing_psudocode.jpg
new file mode 100644
index 00000000..3ad57a7d
Binary files /dev/null and b/notebooks/5_local_search/Simulated_Annealing_psudocode.jpg differ
diff --git a/notebooks/5_local_search/hc.png b/notebooks/5_local_search/hc.png
new file mode 100644
index 00000000..7a6938ed
Binary files /dev/null and b/notebooks/5_local_search/hc.png differ
diff --git a/notebooks/5_local_search/hill-climbing.png b/notebooks/5_local_search/hill-climbing.png
new file mode 100644
index 00000000..32ca3b9a
Binary files /dev/null and b/notebooks/5_local_search/hill-climbing.png differ
diff --git a/notebooks/5_local_search/index.ipynb b/notebooks/5_local_search/index.ipynb
deleted file mode 100644
index 520b3eaf..00000000
--- a/notebooks/5_local_search/index.ipynb
+++ /dev/null
@@ -1 +0,0 @@
-{"nbformat":4,"nbformat_minor":0,"metadata":{"kernelspec":{"display_name":"Python 3.8.1 64-bit","language":"python","name":"python38164bitdac97052df6b4ee89acbe124d3b23037"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.8.1"},"colab":{"name":"AI_project.ipynb","provenance":[],"collapsed_sections":[]}},"cells":[{"cell_type":"markdown","metadata":{"id":"MQGEDm7VyXRs"},"source":["# Local Search"]},{"cell_type":"markdown","metadata":{"id":"TYrJbpD7-uI9"},"source":["local search is a heuristic method for solving computationally hard optimization problems. Local search can be used on problems that can be formulated as finding a solution maximizing a criterion among a number of candidate solutions. Local search algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound is elapsed."]},{"cell_type":"markdown","metadata":{"id":"rysdjagC_Pxg"},"source":["As you can see, in local search, the path to the goal is not important to us and we're only trying to find the state of the goal. In many problems our purpose is to find that goal state rather than the path that takes us there. "]},{"cell_type":"markdown","metadata":{"id":"4wFRqgEmAQrr"},"source":["#### 1. Path to goal is important\n","\n","- 8-Puzzle\n","- Chess\n","- Theorem proving\n","- Route finding"]},{"cell_type":"markdown","metadata":{"id":"z5kv_-44C1ME"},"source":["##### 1.1. Goal state itself is important\n","\n","- 8 Queens\n","- TSP\n","- Job-Shop Scheduling\n","- Automatic program generation\n","- VLSI Layout"]},{"cell_type":"markdown","metadata":{"id":"AiLqmzapD5Ns"},"source":["### 2. Partial state formulation vs. 
Complete state formulation"]},{"cell_type":"markdown","metadata":{"id":"Ekz71MvDEALY"},"source":["In partial state formulation, each path represent a solution to the problem (as we have seen the systematic exploration of search graph) but in complete state formulation each state represent a solution to the problem."]},{"cell_type":"markdown","metadata":{"id":"zcXO5q9hEgXy"},"source":["#### 2.1. Time and memory complexity \n","As you have seen previously, In Systematic exploration of search space, the memory was exponential. But here it's reduced to O(1) instead and the computational time complexity reduced from exponential to O(T) where we choose such T to limit the number of iterations."]},{"cell_type":"markdown","metadata":{"id":"DFtHu8WvIhtY"},"source":["## 3. Constraint Satisfaction vs. Constraint Optimization"]},{"cell_type":"markdown","metadata":{"id":"T7p_SQQsIpEF"},"source":["In constraint satisfaction problems, we look for states that satisfy some constraints (e.g. in-queens problem, where the constraints are: no two queens can attack each other). In the other hand, in constraint optimization, beside satisfying some constraints, we are looking for optimizing an objective function (whether minimizing or maximizing) (e.g. TSP where the objective function is to minimize the total weight of the edges)"]},{"cell_type":"markdown","metadata":{"id":"m5Ds2YEzJ0R3"},"source":["We can convert a constraint satisfaction problem to a constraint optimization problem. for example consider the n-queens problem. we can set our objective function \n","> h = # pairs of queens that attack each other\n","\n","or\n","\n","> h = # constraints that are violated\n","\n","and then we can solve the optimization version of the problem."]},{"cell_type":"markdown","metadata":{"id":"y6z_J1VLLqVw"},"source":["We also can convert a constraint optimization problem to **some** constraint satisfaction problems (Can do something like a binary search or we can do it in linear time. e.g. 
for minimizing function *f* we set a constraint *f=a* (for some reasonable a) and at each step we solve the constraint satisfaction version of the problem and decrease *a* by one)"]},{"cell_type":"markdown","metadata":{"id":"_CBxdaYDNLo8"},"source":["## 4. Trivial Algorithms"]},{"cell_type":"markdown","metadata":{"id":"fGsx6b0iNPZV"},"source":["* ### Random Sampling\n","Generate a state randomly at each step and keep the optimal one and update it at each iteration"]},{"cell_type":"markdown","metadata":{"id":"RhiBaUpDNiUL"},"source":["* ### Random Walk\n","Randonmly pick a **neighbor** of the current state"]},{"cell_type":"markdown","metadata":{"id":"8-G8UDGVPI25"},"source":["Both algorithms are asymptotically complete (If the state space is finite, each state is visited at a fixed rate asymptotically)"]},{"cell_type":"markdown","metadata":{"id":"NjlIY7GSF8DI"},"source":["## 5. Hill Climbing"]},{"cell_type":"markdown","metadata":{"id":"3-rMWzUqLJF2"},"source":["a better solution is to use a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. **Hill-climbing algorithm** terminates when it reaches a peak value where no neighbor has a higher value.\n","Note that, in this algorithm nodes only contain the state and the value of the objective function in that state (not path) so may see previous states. \n","It is also called **greedy local search** as it only looks to its good immediate neighbor state and not beyond that. This may seem like a good thing, but it's not: hill climbing can get stuck in a local optimum easily so its convergence depends on the **initial state**.\n"]},{"cell_type":"markdown","metadata":{"id":"2v42PwhCU7bI"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"3nVVXDhFVATk"},"source":["### 5.1. 
Example: 8-queens problem:\n","States: 8 queens on the board, one per column (88 ≈ 17 𝑚𝑖𝑙𝑙𝑖𝑜𝑛) \n","Successors(s): all states resulted from 𝑠 by moving a single queen to another square of the same column (8 × 7 = 56) \n","Cost function ℎ(s): number of queen pairs that are attacking each other, directly or indirectly. \n","Global minimum: ℎ(s) = 0 \n"]},{"cell_type":"markdown","metadata":{"id":"1JJlgV2HVdzn"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"vlbmRuQxVq4b"},"source":["in the above example, the hill-climbing algorithm converges to h = 1 and it can't do any action to improve h, so it stuck at the local minimum. \n","### 5.2. 8-queens statistics:\n","*\tState-space of size ≈17 million \n","*\tStarting from a random state, steepest-ascent hill-climbing solves **14%** of the problem instances and 86% of the time getting stuck. \n","*\tIt takes **4 steps** on average when it succeeds, 3 when it gets stuck. \n"]},{"cell_type":"markdown","metadata":{"id":"a48zgD-nWEhx"},"source":["### 5.3. Hill-climbing properties: \n","\n","*\t**Not complete**: because doesn't have memory and may see **repetitive states** and also has issues with **local optimal**. \n","* In the worst case has a terrible running time. \n","*\tThe space complexity of O(1). \n","*\tSimple and often very fast. \n","\n","So the solutions to improve the Hill-climbing algorithm take into account **recurring states** and **local optimal**. \n"]},{"cell_type":"markdown","metadata":{"id":"yujfGOjCWiPH"},"source":["For **convex** or **concave** functions, like the examples below, the hill-climbing algorithm gets the optimal solution, because the local optimal is the **same** as global optimal. \n",""]},{"cell_type":"markdown","metadata":{"id":"KxbalXziXbUk"},"source":["### 5.4. 
Hill-climbing search problems:\n","*\t**Local optimal** (except convex/concave functions that mentioned before)\n","*\t**Plateau**: a flat area (flat local optima, shoulder) \n"," \n","\n"," \n"," \n"," \n","\n"]},{"cell_type":"markdown","metadata":{"id":"3VXU8UOMZ-rh"},"source":["* **Diagonal ridges**: From each local maximum all the available actions point downhill, but there is an uphill path! \n","\n"," \n",""]},{"cell_type":"markdown","metadata":{"id":"kGOGmNmjaYUg"},"source":["### 5.5. Sideway moves\n","In some problems, like n-queens, converging to local optimal isn't acceptable and should find some better solutions. \n","A solution that may help is to use **sideway moves**: If no downhill (uphill) moves, allow sideways moves in hope that the algorithm can escape shoulders. There might be a limit to the number of sideway moves allowed to avoid infinite loops. \n","For the 8-queens problem, if set 100 for the sideway moves limit, the percentage of problems solved raises from 14 to **94%**, and 21 steps are needed for every successful solution, 64 for each failure. So using sideway moves the probability of convergence to optimal solution increases but the converge time increases too. \n","When **sideway moves** are allowed, performance improves ...\n"]},{"cell_type":"markdown","metadata":{"id":"q_llp1uUa2qK"},"source":["### 5.6. Stochastic Variations\n","Another solution that helps hill-climbing to be complete, is **Stochastic Variations**. When the state-space landscape has local optima, any search that moves only in the greedy direction cannot be complete. \n","the idea of stochastic variations is to combine **random-walk** and **greedy hill-climbing**. \n","At each step do one of the following: \n","*\t**Greedy**: With probability, p moves to the neighbor with the largest value.\n","*\t**Random**: With probability, 1-p moves to a random neighbor.\n"]},{"cell_type":"markdown","metadata":{"id":"BNE0DnHFbQsm"},"source":["### 5.7. 
Random-restart Hill-climbing\n","All previous versions are **incomplete** because of getting stuck on local optima, but the **random-restart hill-climbing** gives a complete algorithm. \n","In random-restart hill-climbing, start with the **initial random state**, and if terminates with the failure, choose another initial random state, and so on... \n"," If **p** be the probability of success in each hill-climbing search, then the **expected number of restarts will be 1/p**. \n","When multiple restarts are allowed, performance improves...\n"]},{"cell_type":"markdown","metadata":{"id":"0-QbYjdabslA"},"source":["### 5.8. Hill-Climbing with both Random Walk & Random Sampling\n","If we want to increase the randomness we can combine ideas of **greedy local search**, **random walk**, and **random restart**.\n","At each step do one of the three with the same probability:\n","*\t**Greedy**: move to the neighbor with the largest value\n","*\t**Random Walk**: move to a random neighbor\n","*\t**Random Restart**: Start over from a new, random state\n","\n"]},{"cell_type":"markdown","metadata":{"id":"BY3kkElJb-DZ"},"source":["### 5.9. Tabu search:\n","**Tabu Search** works like hill-Climbing, but it maintains a **tabu list** of constant size, like k, to avoid getting stuck in local optima. The tabu list holds k recent used objects that are taboo to use for now. Moves that involve an object in the tabu list, are not accepted.\n","Tabu search raises the space complexity from O(1) to O(k) but in many problems that use sideway moves, it improves the performance of hill-climbing.\n"]},{"cell_type":"markdown","metadata":{"id":"6sz3ve2cyXSi"},"source":["## 6. Simulated annealing"]},{"cell_type":"markdown","metadata":{"id":"01FRBl3ayXSm"},"source":["__Simulating annealing (SA)__ is one the search algorithms.The idea which is used for SA is close to random walking in other search algorithms. 
SA uses physical concepts for escaping local optimas by allowing some bad moves and gradually decreasing their size and frequency because if these bad moves go on and the answer be somewhere near global optima, it will get far from global optima.This method proposed in 1983 by IBM researchers for solving VLSI layout problems."]},{"cell_type":"markdown","metadata":{"id":"1TRskALeyXSp"},"source":["Let's have a look at a physical analogy:\n","- Imagine letting a ball roll downhill on the function surface\n","- Now shake the surface, while the ball rolls\n","- Gradually reduce the amount of shaking"]},{"cell_type":"markdown","metadata":{"id":"c_c1aLqHyXSr"},"source":["The picture below demonstraits the places where ball probably will be and the next state is getting closer to goal state which is global optima:\n",""]},{"cell_type":"markdown","metadata":{"id":"_U52mJKmyXSt"},"source":["Simulated annealing refers to the process of cooling a liquid until it form crystalline shape. This physiacal process must be done slowly to form better crystalline shapes.At first molecules of liquid have too much kinetic energy and are moving so fast with brownian motions, by cooling it slowly seems that the energy is getting less and less. In this process due to the fact that we are reducing the temperature gradually, number of bad moves or moves with too much energy will decrease until it converges to global optima."]},{"cell_type":"markdown","metadata":{"id":"yIv8xVsWyXSw"},"source":["Based on this intuition:\n","- Define a variable named T for the temperature.\n","- The value of T is high at first.\n","- According to temperature schedule, reduce this value.\n"," - In high temperature probability of \"locally bad\" moves is higher.\n"," - In low temperature probability of \"locally bad\" moves is lower."]},{"cell_type":"markdown","metadata":{"id":"SadVzy29yXSy"},"source":["### 6.1. 
Pseudocode"]},{"cell_type":"markdown","metadata":{"id":"2-eWnSM4yXSz"},"source":["The following pseudocode presents the simulated annealing heuristic as described above:"]},{"cell_type":"markdown","metadata":{"id":"lmApoYIryXS1"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"zFq3pNm5yXS3"},"source":["### 6.2. Effect of temperature"]},{"cell_type":"markdown","metadata":{"id":"jrTdqq_eyXS6"},"source":["This picture illustrates 2 points:\n","- At first, the high temprature causes more bad moves and the acceptation probability drops slowly(low slope).\n","- As the temperature decreases, the bad moves' probability converges to zero so fast(high slope)."]},{"cell_type":"markdown","metadata":{"id":"7ErvIyi9yXS7"},"source":["\n"]},{"cell_type":"markdown","metadata":{"id":"Uoh4c_-pyXS8"},"source":["In this exmaple, SA is searching for a maximum. By cooling the temperature slowly, the global maximum is found. "]},{"cell_type":"markdown","metadata":{"id":"dzrxlYXNyXS-"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"vyuR-PhAyXS_"},"source":["### 6.3. Simulated Annealing in practice"]},{"cell_type":"markdown","metadata":{"id":"iFT5AY68yXTC"},"source":["__How to define this schedulability?__\n","
\n","There is only one theorem about this.\n","
\n","_Theorem_: If T is decreased sufficiently slow, global optima will be found approximately with probability of 1.\n","
\n","__Is this rate same for all problems?__\n","
\n","No, it depends on the problem.\n","
\n","__Now, Is this theorem a useful guarantee?__\n","
\n","Convergence can be guaranteed if at each step, T drops no more quickly than $\\frac{C}{log n}$, where C is a constant and problem dependent and n is the number of steps so far.In practice different Cs are used to find the best choice for problem."]},{"cell_type":"markdown","metadata":{"id":"8h3Uy0C8yXTF"},"source":["### 6.4. Other applications"]},{"cell_type":"markdown","metadata":{"id":"sFC7Qy5vyXTH"},"source":["- [Traveling salesman](https://en.wikipedia.org/wiki/Travelling_salesman_problem)\n","- [Graph partitioning](https://en.wikipedia.org/wiki/Graph_partition)\n","- [Graph coloring](https://en.wikipedia.org/wiki/Graph_coloring)\n","- [Scheduling](https://en.wikipedia.org/wiki/Scheduling_(computing))\n","- [Facility layout](https://www.managementstudyguide.com/facility-layout.htm)\n","- [Image processing](https://en.wikipedia.org/wiki/Digital_image_processing)\n","- ..."]},{"cell_type":"markdown","metadata":{"id":"pKk-vJqZyXTL"},"source":["## 7. Local beam search"]},{"cell_type":"markdown","metadata":{"id":"eg3tHaSA4IG3"},"source":["Keeping only one node in memory is an extreme reaction to memory problems.\n","
\n","__Local beam search__ is another algorithm which keeps track of k states instead of keeping only one node in memory."]},{"cell_type":"markdown","metadata":{"id":"-eg2kNln88KD"},"source":["**Now, How?**\n","
\n","- Initially: Select k states randomly. \n","- Next: Determine all successors of k states.\n","- If any successor is goal, search has finished.\n","- Else select k best from successors and repeat."]},{"cell_type":"markdown","metadata":{"id":"BtK27rSv-6_d"},"source":["This is an example of **Local Beam Search** when k = 3:"]},{"cell_type":"markdown","metadata":{"id":"r39NLpIb59Zw"},"source":["\n"]},{"cell_type":"markdown","metadata":{"id":"5gjLKGoYFCzp"},"source":["$\\star$ Note that this algorithm is not the same as k random-start searches run in parallel. In Beam search, searches that find good states recruit other searches to join them.\n"]},{"cell_type":"markdown","metadata":{"id":"V9F_sDsgMtGI"},"source":["### 7.1. Stochastic Beam Search"]},{"cell_type":"markdown","metadata":{"id":"NxVbZIVbMvtu"},"source":["Quite often, all k states end up on same local hill.\n","
\n","__So, What should be done?__\n","
\n","The idea is to use **Stochastic beam search** which chooses k successors randomly,biased towards good ones."]},{"cell_type":"markdown","metadata":{"id":"ITypdfm7AQCd"},"source":["### 7.2. Uses"]},{"cell_type":"markdown","metadata":{"id":"N58yi-AvAWLU"},"source":["This algorithm has many uses in:\n","- MT ([Machine Translation](https://en.wikipedia.org/wiki/Machine_translation))\n","- NLP ([Natural Language Processing](https://en.wikipedia.org/wiki/Natural_language_processing))"]},{"cell_type":"markdown","metadata":{"id":"qsar3LpvjjlT"},"source":["## 8. Genetic Algorithms"]},{"cell_type":"markdown","metadata":{"id":"f2RXmJtmjm-3"},"source":["- ### A variant of stochastic beam search\n","Successors can be generated by combining two parent states rather than modifying a single state\n"]},{"cell_type":"markdown","metadata":{"id":"Xy3_XnsZj67K"},"source":["### 8.1. Algorithm"]},{"cell_type":"markdown","metadata":{"id":"cFUrtth1kFkb"},"source":["- A State (solution) is represented as a string over a finite alphabet (e.g. a chromosome containing genes)\n","\n","- Start with *k* randomly generated states (**Population**)\n","\n","- Evaluation function to evaluate states (Higher values for better states) (**Fitness function**)\n","\n","- Combining two parent states and getting offsprings (**Cross-over**)\n"," - Cross-over point can be selected randomly\n","\n","- Reproduced states can be slightly modified with a probability of *P* (**mutation**)\n","\n","- The next generation of states is produces by selection (based on fitness function), crossover and mutation \n"]},{"cell_type":"markdown","metadata":{"id":"Gw4E2VAVlghB"},"source":["Keeping all these in mind, we will go into more detail in the below example"]},{"cell_type":"markdown","metadata":{"id":"Ez9hAQqyln4_"},"source":["### 8.2. 
Example: 8-Queens\n"]},{"cell_type":"markdown","metadata":{"id":"9ZRp1X8Ilq-Z"},"source":["- Describe the state as a string\n","\n"," "]},{"cell_type":"markdown","metadata":{"id":"-AE68ZMGmSQS"},"source":["- **Fitness function** : number of non-attacking pairs of queens (24 for above figure)"]},{"cell_type":"markdown","metadata":{"id":"vtpXLBZxmeop"},"source":["- **Cross-over** : To select some part of the state from one parent and the rest from another\n","\n"," "]},{"cell_type":"markdown","metadata":{"id":"jMLjQKI0m9hT"},"source":["- **Mutation** : To change a small part of one state with a small probability \n","\n"," "]},{"cell_type":"markdown","metadata":{"id":"gmv2GHeBnkiU"},"source":["- **Selection** : We can also have a selection step that selects from the population and the probability of the selection of each individual in population is with respect to it's fitness value (See example below)"]},{"cell_type":"markdown","metadata":{"id":"5qSmgLTSoFyk"},"source":["### 8.3. Another variant of genetic algorithm for 8-Queens"]},{"cell_type":"markdown","metadata":{"id":"x2xfQjxBoqqu"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"Se5LpXkGoswV"},"source":["And in the picture below you can see the way we calculated the fitness function"]},{"cell_type":"markdown","metadata":{"id":"c8d4FFOboMpo"},"source":[""]},{"cell_type":"markdown","metadata":{"id":"N52Mb_yQoi2O"},"source":["### 8.4. Pros. and Cons."]},{"cell_type":"markdown","metadata":{"id":"q0QnVxXKpIpE"},"source":["- **Positive points**\n"," - Random exploration can find solutions that local search can't (via crossover primarily)\n","\n"," - Appealing connection to human evaluation (\"neural\" networks, and \"genetic\" algorithms are metaphors!)\n","\n","- **Negative points**\n"," - Large number of \"tunable\" parameters\n"," - Lack of good empirical studies comparing to simpler methods\n"," - Useful on some (small?) 
set of problems but no convincing evidence that GAs are better than hill-climbing with random restarts in general"]},{"cell_type":"markdown","metadata":{"id":"sEackcKB82NI"},"source":["## 9. Summary"]},{"cell_type":"markdown","metadata":{"id":"9eMLSNE8858v"},"source":["All the introduced methods are related to the search problems which are computationally hard. These algorithms move from solution to solution in the space of candidate solutions (the search space) by applying local changes, until a solution deemed optimal is found or a time bound is elapsed.\n","\n","As mentioned before, in the exploration of search space, the space complexity was exponential; But using these local algorithms, it is reduced to O(1) instead, and the computational time complexity, which was exponential, is reduced to O(T) where T is the number of iterations.\n","\n","\n"]},{"cell_type":"markdown","metadata":{"id":"D4TZ11yp7YyK"},"source":["## 10. References"]},{"cell_type":"markdown","metadata":{"id":"Wjid5Lxb7h9y"},"source":["\n","\n","* [AI course at Sharif University of Technology](http://ce.sharif.edu/courses/99-00/1/ce417-2/index.php/section/resources/file/resources)\n","* https://en.wikipedia.org/wiki/Simulated_annealing\n"]},{"cell_type":"code","metadata":{"id":"0QkTU_BU7pqA"},"source":[""],"execution_count":null,"outputs":[]}]}
\ No newline at end of file
diff --git a/notebooks/5_local_search/index.md b/notebooks/5_local_search/index.md
new file mode 100644
index 00000000..7f876c17
--- /dev/null
+++ b/notebooks/5_local_search/index.md
@@ -0,0 +1,509 @@
+# Local Search
+
+## Contents
+
+- [Introduction](#introduction)
+ - [What is Local Search?](#WhatIsLocalSearch)
+ - [Advantages](#Advantages)
+- [Methods](#methods)
+ - [Hill Climbing](#hillClimbing)
+ - [Tabu Search](#tabuSearch)
+ - [Local Beam Search](#localBeamSearch)
+ - [Simulated Annealing](#SimulatedAnnealing)
+ - [Genetic Algorithms](#geneticAlgorithms)
+ - [Large Neighborhood Search](#largeN)
+- [Summary](#summary)
+- [References](#references)
+
+## 1. Introduction
+
+### 1.1. What is Local Search?
+
+Local search is a heuristic method for solving computationally hard constraint satisfaction or optimization problems. The family of local search methods is typically used in search problems where the search space is very large or even infinite, so that classical search algorithms do not work efficiently.
+
+Usually, local search is applied to problems whose solution is a state rather than a path; for many problems the path to the final solution is irrelevant. The procedure is quite simple: the algorithm starts from a solution that may not be optimal, or may not satisfy all the constraints, and in each step or iteration it tries to find a slightly better solution. This gradual improvement is done by examining the neighbors of the current state and choosing the best neighbor as the next state of the search.
+
+An example of the application of local search is solving the [travelling salesman problem](https://en.wikipedia.org/wiki/Travelling_salesman_problem). In each step, we may try to replace two edges with two other edges which may result in a shorter cycle in the graph.
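
A minimal sketch of such an edge-swap in Python (the helper names, the distance matrix, and the tiny 4-city instance are made up for illustration, not taken from any TSP library): a 2-opt step replaces two edges by reversing a segment of the tour, and we repeat while this shortens the cycle.

```python
import itertools

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_step(tour, dist):
    """Try every 2-opt move (reverse one segment, i.e. swap two edges);
    return the first strictly improving neighbor, or None if none exists."""
    n = len(tour)
    for i, j in itertools.combinations(range(n), 2):
        neighbor = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if tour_length(neighbor, dist) < tour_length(tour, dist):
            return neighbor
    return None

# Tiny 4-city instance: the crossing tour [0, 2, 1, 3] should be repaired.
dist = [[0, 1, 9, 1],
        [1, 0, 1, 9],
        [9, 1, 0, 1],
        [1, 9, 1, 0]]
tour = [0, 2, 1, 3]
while (improved := two_opt_step(tour, dist)) is not None:
    tour = improved
print(tour, tour_length(tour, dist))  # → [0, 1, 2, 3] 4
```

On this toy instance the crossing tour (length 20) is repaired to the optimal square tour (length 4); real implementations would also randomize the starting tour and scan candidate moves more cleverly.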
+
+
+##### 2.1.2.2 Stochastic hill climbing
+This version was developed to address the *incompleteness* of hill climbing, but it does not solve the problem completely; it only gives the search a chance to escape from local maxima. The difference between this version and the original hill climbing lies in the neighbor-selection step: the algorithm does not always choose the best neighbor. With probability p it does so, and with probability 1 - p it chooses a random neighbor. This stochastic neighbor selection helps the algorithm escape from local maxima, but, as said before, it does not make hill climbing complete.
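
As a concrete sketch (the function name, the default p, and the toy landscape are our own illustrative choices, not a standard API), the modified neighbor-selection step might look like this:

```python
import random

def stochastic_hill_climb(initial, neighbors, value, p=0.8, steps=100, rng=random):
    """With probability p move to the best neighbor, otherwise to a random
    neighbor; remember the best state seen, since the walk may go downhill."""
    state = best = initial
    for _ in range(steps):
        nbrs = neighbors(state)
        if not nbrs:
            break
        state = max(nbrs, key=value) if rng.random() < p else rng.choice(nbrs)
        if value(state) > value(best):
            best = state
    return best

# Toy 1-D landscape: a local maximum at x = 2, the global maximum at x = 8.
def value(x):
    return {2: 5, 8: 10}.get(x, -abs(x - 8))

def neighbors(x):
    return [y for y in (x - 1, x + 1) if 0 <= y <= 10]

print(stochastic_hill_climb(0, neighbors, value, p=1.0))  # pure greedy: stalls at 2
random.seed(0)
print(stochastic_hill_climb(0, neighbors, value, p=0.8))  # random moves give a chance to reach 8
```

With p = 1 this degenerates to greedy hill climbing and gets stuck on the local maximum; with p < 1 the search can escape, but there is still no guarantee, which is why the algorithm remains incomplete.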
+
+##### 2.1.2.3 Random-restart hill climbing
+This version was developed to address the *local maxima* problem. We do not change the core of hill climbing; we simply run the original algorithm again and again, from new random initial states, until we find the optimal goal. This version of hill climbing is complete with probability approaching 1, because it will eventually generate a goal state as the initial state. If each run of hill climbing succeeds with probability p, the expected number of restarts is 1/p.
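
The restart loop is a thin wrapper around any incomplete search. In the sketch below (all names hypothetical), the inner search is faked by a run that succeeds with probability p = 0.1, just to check the 1/p claim empirically:

```python
import random

def random_restart(search, random_state, is_goal, max_restarts=1000, rng=random):
    """Rerun `search` from fresh random initial states until it returns a goal.
    If each run succeeds independently with probability p, the expected
    number of runs is 1/p."""
    for restarts in range(1, max_restarts + 1):
        result = search(random_state(rng))
        if is_goal(result):
            return result, restarts
    return None, max_restarts

# Stand-in for hill climbing: a run "succeeds" when the random start is 7,
# i.e. with probability p = 0.1, so we expect about 10 runs on average.
random.seed(1)
runs = [random_restart(lambda s: s, lambda r: r.randrange(10), lambda x: x == 7)[1]
        for _ in range(2000)]
print(sum(runs) / len(runs))  # close to 1/p = 10
```

In real use, `search` would be a complete hill-climbing run and `is_goal` would test, e.g., h(s) = 0 for n-queens.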
+
+###### Example 2.1.3 (8-queens)
+Running the 8-queens problem with this algorithm gives the following results:
+- Without sideway moves (p = 0.14):
+  - Expected number of restarts: 1/p = 1/0.14 ≈ 7
+- With sideway moves (p = 0.94):
+  - Expected number of restarts: 1/p = 1/0.94 ≈ 1.06
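
Combining the 1/p rule with the per-run step counts quoted for 8-queens in the original notes (about 4 steps per successful run and 3 per failed run without sideway moves; 21 and 64 respectively with sideway moves), the expected totals can be computed directly:

```python
def expected_steps(p, steps_success, steps_failure):
    """One eventual success plus an expected (1 - p) / p failures before it."""
    return steps_success + (1 - p) / p * steps_failure

print(1 / 0.14)                      # ≈ 7.1 expected restarts without sideway moves
print(expected_steps(0.14, 4, 3))    # ≈ 22 total steps
print(1 / 0.94)                      # ≈ 1.06 expected restarts with sideway moves
print(expected_steps(0.94, 21, 64))  # ≈ 25 total steps
```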
+
+### 2.2 Tabu Search
+This algorithm is similar to hill climbing but uses a trick to avoid getting stuck in local maxima. It is actually a meta-heuristic, meaning that it guides and controls an underlying heuristic. But how does this strategy help us find the global optimum?
+
+First of all, the algorithm keeps a list of states called the **tabu list**. As in hill climbing, from each state we examine its neighbors and choose one of them, but we are not allowed to choose states that are in the tabu list; they are taboo. Also, we do not stop if the best neighbor is no better than the current state: if the new state improves on the best solution found so far, we update the best solution and move to it; otherwise we simply move to the new state anyway. The termination criterion can be a limit on the number of iterations. Below you can find a flowchart of the steps of this algorithm.
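
In addition to the flowchart, the loop can be sketched as follows (function names and the toy landscape are our own; practical tabu search implementations often store recent *moves* or move attributes rather than whole states):

```python
from collections import deque

def tabu_search(initial, neighbors, value, tabu_size=10, iterations=100):
    """Always move to the best neighbor not on the tabu list, even when it is
    worse than the current state; remember the best state seen overall."""
    current = best = initial
    tabu = deque([initial], maxlen=tabu_size)  # fixed-size memory of recent states
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = max(candidates, key=value)   # may be a downhill move
        tabu.append(current)
        if value(current) > value(best):
            best = current
    return best

# Toy 1-D landscape: a local maximum at x = 2, the global maximum at x = 8.
def value(x):
    return {2: 5, 8: 10}.get(x, -abs(x - 8))

def neighbors(x):
    return [y for y in (x - 1, x + 1) if 0 <= y <= 10]

print(tabu_search(0, neighbors, value, tabu_size=3, iterations=50))  # → 8
```

Because recently visited states are taboo, the search cannot slide back into the local maximum at x = 2 and is pushed across the valley toward the global maximum.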
+