Biological-Neuron-To-BackPropagation, an AI seminar delivered by GiGi Koneti

The mathematical evolution of the multi-layer perceptron from the historic perceptron algorithm.

🎉 Presentation Shared: The Journey of the Artificial Neuron is Here!

Super excited to share my presentation on the incredible mathematical journey that built the foundations of modern AI! This repository contains the PDF titled "The Calculus of Cognition: From Biological Neuron to Backpropagation."

It was an awesome experience exploring how a simple mathematical idea, the Perceptron, evolved into the powerful structure (the MLP) that underpins today's Deep Learning revolution.


📌 Session Details

For context, this was a special seminar delivered by GiGi Koneti (me) during our 'Introduction to Artificial Intelligence' class.

  • Date: Tuesday, October 28, 2025
  • Duration: A deep-dive session that ran for well over an hour, starting at 1:00 PM.
  • Setting: Room No. 312, Building Number 13.
  • Course: Semester 1, Department of Automobile Engineering.

Cover Page


🚀 The Essence: Key Highlights from the Presentation

Our journey followed the story of these computational neurons and the brilliance needed to make them truly "learn":

  • The Perceptron (1957): This was the first spark! Introduced by Frank Rosenblatt, it modeled a biological neuron as a weighted sum of inputs that "fires" once a threshold is crossed. It was AI's first true linear classifier.
  • The Problem Child (The XOR Flaw): AI hit its first major roadblock. As proven by Minsky & Papert, a single Perceptron could only draw one straight line to classify data. It failed miserably at simple non-linear problems like the XOR gate (see the short sketch after this list), leading to the first "AI Winter."
  • The Breakthrough (The Multi-Layer Perceptron - MLP): The solution was surprisingly simple: stack the neurons! By adding one or more Hidden Layers, the MLP gained the power to create incredibly complex, non-linear decision boundaries.
  • The Calculus Secret Sauce (Backpropagation): The big question was how to train the weights in those hidden layers. Backpropagation (popularized by Rumelhart, Hinton & Williams in 1986) is the elegant mathematical answer. It's simply the Chain Rule of differential calculus applied to efficiently spread the error backward from the output, telling every single weight how much to adjust itself.

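The sketch below is not from the slides; it's just a quick, hedged illustration of the XOR flaw. It brute-forces a grid of weights and biases for a single linear threshold unit and counts how many settings reproduce the XOR truth table: none do, because XOR is not linearly separable.

```python
# Illustrative only: test a grid of (w1, w2, b) settings and count how many
# make a single linear threshold unit reproduce XOR. The answer is zero.
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

def threshold_unit(w1, w2, b):
    # Output 1 when the weighted sum crosses zero, else 0.
    return (X @ np.array([w1, w2]) + b >= 0).astype(int)

grid = np.linspace(-2.0, 2.0, 41)
hits = sum(
    np.array_equal(threshold_unit(w1, w2, b), y_xor)
    for w1, w2, b in itertools.product(grid, grid, grid)
)
print("grid settings that solve XOR:", hits)  # prints 0
```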
🧠 Simple Context: How Neurons Go from Zero to Hero

If you want the full details, the presentation walks through the math in a clear, step-by-step way. Here’s the core of the evolution:

1. The Perceptron: The 'Yes/No' Machine

Think of a Perceptron as a simple decision maker. It takes inputs ($x$), assigns an importance score (weight, $w$) to each, sums them up, and then makes a final decision.

The Decision Formula (Weighted Sum): It calculates the total signal $z$ it receives: $$ z = \sum_{i=0}^{m} w_i x_i $$

If $z$ is strong enough ($z \ge \text{threshold}$), the output is 1 (Yes); otherwise, it's 0 (No). This simple structure is why it could only separate things with a straight line—it was fundamentally linear.
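As an unofficial companion to the formula above, here is a minimal Python sketch of that decision rule. It assumes the common convention $x_0 = 1$, so the threshold is folded into the weight $w_0$ and the unit fires when $z \ge 0$; the hand-picked AND-gate weights are purely for illustration.

```python
# Minimal perceptron decision rule: weighted sum, then a hard threshold.
import numpy as np

def perceptron_predict(x, w):
    """Return 1 if the weighted sum z = sum_i w_i * x_i crosses 0, else 0."""
    x = np.concatenate(([1.0], x))  # prepend x_0 = 1 so w_0 acts as the bias
    z = np.dot(w, x)
    return 1 if z >= 0 else 0

# Hand-picked weights that make the unit behave like a logical AND gate.
w_and = np.array([-1.5, 1.0, 1.0])
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron_predict(np.array(x, dtype=float), w_and))
```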

2. The MLP & Backpropagation: The Power of Calculus

The MLP stacked these neurons, making it non-linear. But how do we teach the inside (the hidden layers)? This is where Backpropagation comes in—it's the algorithm that learns.

Our goal is to minimize the Loss ($L$)—the difference between the network's prediction and the true answer—by adjusting the weights ($w$).

The Learning Rule (Gradient Descent): To find the fastest way down the error "hill," we move against the gradient (the slope) for every single weight.

$$ w_{new} = w_{old} - \alpha \cdot \frac{\partial L}{\partial w} $$

  • $\alpha$ is the Learning Rate (how big a step we take).
  • $\frac{\partial L}{\partial w}$ is the Partial Derivative, calculated by skillfully applying the Chain Rule across the whole network. This tells us exactly how much the error changes with respect to that specific weight.

This single equation, powered by the calculus of the Chain Rule, is what allows complex, multi-layered networks to learn from data and is the real 'secret' behind the AI we see today!
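To make that learning rule concrete, here is a small sketch (not the presentation's code) of a one-hidden-layer MLP trained on XOR. It assumes sigmoid activations, a squared-error loss, and full-batch gradient descent; the backward pass is just the Chain Rule pushing the error from the output layer to the hidden layer, and every weight is then nudged by $w_{new} = w_{old} - \alpha \cdot \frac{\partial L}{\partial w}$.

```python
# Tiny MLP (2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output) trained
# on XOR with plain gradient descent; gradients come from the chain rule.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
alpha = 1.0                                     # learning rate

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                  # hidden activations
    y_hat = sigmoid(h @ W2 + b2)              # network prediction
    # Backward pass (chain rule): error signal at each layer
    d_out = (y_hat - y) * y_hat * (1 - y_hat)   # dL/dz at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)        # dL/dz at the hidden layer
    # Gradient-descent updates: w_new = w_old - alpha * dL/dw
    W2 -= alpha * (h.T @ d_out);  b2 -= alpha * d_out.sum(axis=0)
    W1 -= alpha * (X.T @ d_hid);  b1 -= alpha * d_hid.sum(axis=0)

print(np.round(y_hat, 2))   # should move toward [[0], [1], [1], [0]]
```

With this setup the predictions typically settle near the XOR targets after a few thousand updates; the only "learning" machinery involved is the Chain Rule and the update rule above.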

📄 Download the Full Presentation

You can download the full presentation slides here:

MLP Presentation PDF - The Calculus of Cognition
