A curated collection of code, notes, and hands-on practice covering the foundational building blocks of modern AI. This repo documents my learning journey through core machine learning and deep learning concepts, with each folder dedicated to one major topic.
| Folder | Description |
|---|---|
| `linear_regression` | Basics of linear regression, gradient descent, and error minimization |
| `logistic_regression` | Binary classification with logistic regression (sigmoid activation) and multiclass classification with the softmax function |
| `artificial_neural_nets` | Simple neural networks, built from scratch and with libraries like PyTorch |
| `auto_differentiation` | A custom auto-diff engine implementation to understand how backpropagation works |
| `convolutional_neural_nets` | Image-based deep learning with CNNs: filters, pooling, and feature extraction |
| `recurrent_neural_nets` | Sequential data modeling with vanilla RNNs and LSTMs |
| `transformers` | Attention mechanisms, encoder-decoder architecture, and an intro to Transformers |
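To give a flavor of the first topic: the core loop of gradient descent for one-variable linear regression fits in a few lines of pure Python. This is an illustrative sketch with made-up toy data and hyperparameters, not the folder's exact code:

```python
# Fit y = w*x + b to toy data by minimizing mean squared error
# with plain gradient descent (data and learning rate are illustrative).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05  # learning rate

for _ in range(2000):
    n = len(xs)
    # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
    dw = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
    db = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
    w -= lr * dw
    b -= lr * db
```

After training, `w` and `b` approach the true slope 2 and intercept 1.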
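The two activations mentioned for logistic regression can be sketched like this (again an illustration, not the repo's code): sigmoid squashes one logit into a probability for binary classification, while softmax normalizes a vector of logits into a distribution for multiclass:

```python
import math

def sigmoid(z):
    """Map a single logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def softmax(logits):
    """Map a vector of logits to a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

p = sigmoid(0.0)               # 0.5: exactly on the decision boundary
probs = softmax([2.0, 1.0, 0.1])  # sums to 1; largest logit gets most mass
```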
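The idea behind a minimal scalar auto-diff engine is that each value remembers how it was produced, so `backward()` can walk the graph in reverse and apply the chain rule. A small sketch in the spirit of such engines (illustrative only, not the folder's actual implementation):

```python
class Value:
    """A scalar that tracks its own gradient (minimal autodiff sketch)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._children = _children
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
c = a * b + a   # c = a*b + a, so dc/da = b + 1 = 4 and dc/db = a = 2
c.backward()
```

Running this sets `a.grad` to 4.0 and `b.grad` to 2.0, matching the hand-computed derivatives.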
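And the scaled dot-product attention at the heart of Transformers, softmax(QKᵀ/√d_k)V, can be sketched in pure Python. The shapes and values below are toy assumptions for illustration:

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q, K, V are lists of vectors (lists of floats)."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        # Softmax over the scores (subtract the max for stability)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the attention-weighted sum of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                       # one query
K = [[1.0, 0.0], [0.0, 1.0]]           # two keys
V = [[10.0, 0.0], [0.0, 10.0]]         # their values
result = attention(Q, K, V)            # query attends mostly to the first key
```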
- Pure Python + PyTorch implementations
- Practice scripts, learning notebooks
- Experiments, visualizations, and intuition-building mini-projects
This is both a personal learning log and a resource for anyone wanting to understand the *why* behind the *how* of AI.
- Browse each folder by topic
- Run the Python scripts or Jupyter notebooks
- Modify, break, and learn by doing
This is mainly a personal learning repo, but feel free to fork or use it for your own study. Suggestions are welcome via Issues or Discussions.