A clean, simple CNN for classifying flower images: a complete classification system with interactive, file-dialog-based testing.
```
├── CNN.ipynb                    # Clean Jupyter notebook (9 cells)
├── flower_classifier_final.py   # Complete Python script
├── test_image_selection.py      # Tests the file dialog
├── Dataset/                     # Image dataset
│   ├── Anthurm/                 # 50 images
│   ├── Rose/                    # 50 images
│   └── Sunflower/               # 50 images
└── screenshots/                 # Documentation images
    ├── 01_project_overview.png
    ├── 02_training_process.png
    ├── 03_file_selection.png
    ├── 04_prediction_results.png
    └── 05_accuracy_graphs.png
```
- Open `CNN.ipynb` in VS Code
- Run the cells sequentially (`Shift+Enter`)
- The final cell opens a file dialog to select your image
Model training in progress showing accuracy improvements
```
python flower_classifier_final.py
python test_image_selection.py
```
Interactive file dialog for selecting flower images
The CNN.ipynb notebook contains 9 cells:
- Imports all required libraries
- Checks dataset availability
- Shows dataset statistics
- Sets up data generators
- Configures image preprocessing
- Splits data into training/validation
- Builds CNN model with:
- 2 Convolutional layers
- 2 MaxPooling layers
- Dense layers for classification
- Trains the model for 10 epochs
- Shows training progress
- Displays final accuracy
- Saves trained model as `flower_classifier_model.h5`
- Extracts class names for testing
- Plots training/validation accuracy
- Plots training/validation loss
- Shows training summary
Training and validation accuracy/loss curves showing model performance
- Defines interactive testing function
- File dialog for image selection
- Results visualization
- Execute prediction function
- Select and classify your own images
- File Selection: Choose your own images using a file dialog
- Confidence scores for all classes
- Visual display of images with predictions
- Side-by-side visualization with confidence bars
Example prediction showing selected image and confidence scores for all flower classes
- Checks for missing dependencies
- Validates dataset structure
- Handles invalid image files
- User-friendly error messages
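The checks listed above could look something like this sketch (a hypothetical helper, not the script's exact code; the package and class names mirror this project):

```python
import importlib.util
import os
import sys

def check_environment():
    """Hypothetical startup checks mirroring the script's validation."""
    # Missing dependencies -> friendly install hint instead of a traceback
    for pkg in ['tensorflow', 'numpy', 'matplotlib']:
        if importlib.util.find_spec(pkg) is None:
            sys.exit(f'Missing dependency: {pkg}. Install it with: pip install {pkg}')

    # Dataset structure: one subfolder per flower class
    expected = ['Anthurm', 'Rose', 'Sunflower']
    missing = [c for c in expected if not os.path.isdir(os.path.join('Dataset', c))]
    if missing:
        sys.exit(f'Dataset folders not found: {missing}. '
                 'Expected Dataset/<class>/ with images inside.')
```

Failing fast with a plain-language message is what turns "FileNotFoundError" tracebacks into the user-friendly errors described above.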
- Python 3.7+
- TensorFlow 2.x
- Matplotlib
- NumPy
- Scikit-learn (for evaluation metrics)
- Seaborn (for confusion matrix)
- Tkinter (for file dialog - usually included with Python)
```
Input (64x64x3)
    ↓
Conv2D (32 filters, 3x3) + ReLU
    ↓
MaxPooling2D (2x2)
    ↓
Conv2D (64 filters, 3x3) + ReLU
    ↓
MaxPooling2D (2x2)
    ↓
Flatten
    ↓
Dense (128 units) + ReLU
    ↓
Dense (3 units) + Softmax
    ↓
Output (3 classes)
```
- Training Accuracy: ~98-100%
- Validation Accuracy: ~93-97%
- Training Time: 1-2 minutes (10 epochs)
- Model Size: ~1.5MB
- "Dataset not found"
  - Ensure the `Dataset` folder is in the same directory
  - Check that the folder structure matches the expected format
- "Import Error"
  - Install missing packages: `pip install tensorflow matplotlib numpy scikit-learn seaborn`
- "Kernel not found"
  - Install the Jupyter extension in VS Code
  - Select the correct Python interpreter
- "File dialog not opening"
  - Tkinter may not be installed; it usually ships with Python, but on Debian/Ubuntu you may need `sudo apt install python3-tk`
- Run cells in order - Each cell depends on previous ones
- Wait for training - Cell 4 takes 1-2 minutes to complete
- Test with your images - Use Cell 9 to test with your own flower photos
- Check accuracy - Use Cell 6 to see detailed performance metrics
Original Problem:
- Had a Google Colab notebook that couldn't run in VS Code
- Code used Colab-specific file upload functions
- Testing functionality was completely broken
- Code was cluttered with unnecessary comments
What We Fixed:
- ✅ Removed Google Colab dependencies (`google.colab.files`)
- ✅ Implemented a native file dialog using tkinter
- ✅ Simplified code structure and removed clutter
- ✅ Added proper error handling and validation
- ✅ Enhanced visualization with side-by-side results
- ✅ Focused on user preference (image selection only)
```
Input Image (64x64x3 RGB)
    ↓
Conv2D(32 filters, 3x3)  → detects edges, corners, basic shapes
    ↓
MaxPooling2D(2x2)        → reduces size, keeps important features
    ↓
Conv2D(64 filters, 3x3)  → detects complex patterns like petals
    ↓
MaxPooling2D(2x2)        → further size reduction
    ↓
Flatten                  → converts 2D features to 1D vector
    ↓
Dense(128)               → learns feature combinations
    ↓
Dense(3, softmax)        → outputs probabilities [Anthurm, Rose, Sunflower]
```
Why This Works:
- Convolution: Finds patterns regardless of position in image
- Pooling: Makes model robust to small variations
- Multiple layers: Learns hierarchy from simple to complex features
- Dense layers: Combines all learned features for final decision
What Happens During Training:
- Forward Pass: Image β CNN β Prediction
- Loss Calculation: Compare prediction vs actual label
- Backpropagation: Calculate how to adjust weights
- Weight Update: Improve model based on errors
- Repeat: For all images and epochs
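These five steps can be made concrete with a toy NumPy example: a one-layer softmax classifier trained by the same forward / loss / backprop / update loop. This is illustrative only (not the Keras internals); the near-one-hot features are a stand-in for real images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 samples, 3 flower classes; each sample's features are
# roughly a one-hot indicator of its class, plus noise
y = np.array([0, 1, 2, 0, 1, 2])                  # true labels
X = np.eye(3)[y] + 0.1 * rng.normal(size=(6, 3))  # "images" as 3 features
W = np.zeros((3, 3))                              # weights to learn
lr = 0.5                                          # learning rate

for epoch in range(200):
    # 1. Forward pass: logits -> softmax probabilities
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)

    # 2. Loss: mean cross-entropy of the true class's probability
    loss = -np.log(p[np.arange(len(y)), y]).mean()

    # 3. Backpropagation: gradient of the loss w.r.t. W
    p_err = p.copy()
    p_err[np.arange(len(y)), y] -= 1.0            # dLoss/dlogits for softmax+CE
    grad = X.T @ p_err / len(y)

    # 4. Weight update: step against the gradient
    W -= lr * grad
    # 5. Repeat for all epochs

pred = (X @ W).argmax(axis=1)
print(pred)  # -> [0 1 2 0 1 2], matching y
```

Keras runs this same loop for the CNN, just with millions of weights and automatic differentiation instead of a hand-derived gradient.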
Epoch-by-Epoch Learning:
- Epochs 1-3: Model learns basic features (edges, colors)
- Epochs 4-6: Recognizes shapes and patterns (petals, centers)
- Epochs 7-10: Fine-tunes decision boundaries between classes
```python
# Step 1: Load image
img = image.load_img(img_path, target_size=(64, 64))

# Step 2: Convert to array
img_array = image.img_to_array(img)  # Shape: (64, 64, 3)

# Step 3: Normalize pixels
img_array = img_array / 255.0  # Convert 0-255 to 0-1

# Step 4: Add batch dimension
img_array = np.expand_dims(img_array, axis=0)  # Shape: (1, 64, 64, 3)

# Step 5: Predict
prediction = model.predict(img_array)  # Output shape (1, 3), e.g. [[0.1, 0.8, 0.1]]
```

Why Each Step Matters:
- Resizing: Standardizes input size for CNN
- Normalization: Helps the model train faster and more stably
- Batch dimension: CNN expects multiple images, even if just one
- Prediction: Returns probability for each class
```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tkinter as tk
from tkinter import filedialog
```

What happens:
- TensorFlow: Provides deep learning framework
- ImageDataGenerator: Handles image loading and preprocessing
- Tkinter: Creates native file dialog for image selection
- Other imports: NumPy for arrays, Matplotlib for visualization
```python
train_datagen = ImageDataGenerator(
    rescale=1./255,         # Normalize pixels to 0-1
    validation_split=0.2    # 20% for validation
)
```

What happens:
- Rescaling: Converts pixel values from 0-255 to 0-1 range
- Validation split: Automatically reserves 20% of data for testing
- Flow from directory: Automatically loads images and creates labels
- Batch processing: Groups images for efficient GPU processing
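The "flow from directory" step mentioned above can be sketched as follows. The `target_size`, `batch_size`, and class names follow this project; the synthetic-data fallback is only there so the sketch runs standalone without the real 150-photo dataset:

```python
import os
import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Standalone fallback: synthesize a tiny stand-in dataset if the real
# Dataset/ folder (50 photos per class) is not present
if not os.path.isdir('Dataset'):
    for cls in ['Anthurm', 'Rose', 'Sunflower']:
        os.makedirs(os.path.join('Dataset', cls), exist_ok=True)
        for i in range(5):
            arr = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
            Image.fromarray(arr).save(os.path.join('Dataset', cls, f'{i}.jpg'))

train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.2)

# Both generators scan the same folder tree; `subset` selects the split,
# and labels come from the subfolder names automatically
train_generator = train_datagen.flow_from_directory(
    'Dataset', target_size=(64, 64), batch_size=32,
    class_mode='categorical', subset='training')
validation_generator = train_datagen.flow_from_directory(
    'Dataset', target_size=(64, 64), batch_size=32,
    class_mode='categorical', subset='validation')

print(train_generator.class_indices)
# e.g. {'Anthurm': 0, 'Rose': 1, 'Sunflower': 2}
```

Because the class labels are derived from folder names, adding a new flower class is as simple as adding a new subfolder of images.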
```python
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(3, activation='softmax')
])
```

Layer-by-layer breakdown:
- Conv2D(32): 32 filters detect basic features → Output: 62x62x32
- MaxPool2D: Reduces size by half → Output: 31x31x32
- Conv2D(64): 64 filters detect complex features → Output: 29x29x64
- MaxPool2D: Further reduction → Output: 14x14x64
- Flatten: 2D → 1D → Output: 12,544 neurons
- Dense(128): Feature combinations → Output: 128 neurons
- Dense(3): Final classification → Output: 3 probabilities
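Before `model.fit()` can run, the model must also be compiled with a loss and optimizer (that call is not shown above). A self-contained sketch with a typical choice for this setup, consistent with the softmax output and the `categorical` generators:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(64, 64, 3)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(3, activation='softmax'),
])

# Softmax output + one-hot labels -> categorical cross-entropy;
# 'adam' is a common default optimizer (illustrative choice)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.summary()  # prints the layer output shapes listed above
```

The `model.summary()` printout is a quick way to verify the 62x62x32 → ... → 12,544-neuron shapes claimed in the breakdown.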
```python
history = model.fit(
    train_generator,
    epochs=10,
    validation_data=validation_generator
)
```

What happens:
- 10 epochs: Model sees all training data 10 times
- Batch processing: Processes 32 images at a time
- Validation: Tests on unseen data after each epoch
- History tracking: Records accuracy and loss for plotting
```python
model.save('flower_classifier_model.h5')
class_names = list(train_generator.class_indices.keys())
```

What happens:
- Save model: Stores complete architecture + trained weights
- Extract classes: Gets ['Anthurm', 'Rose', 'Sunflower'] for later use
- H5 format: Efficient binary format for neural networks
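The point of saving is that the model can be restored later without retraining. A minimal round-trip sketch (the one-layer stand-in model is hypothetical, used only so this runs without the training cells):

```python
import tensorflow as tf

# Stand-in model so this sketch runs standalone
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.save('flower_classifier_model.h5')  # HDF5: architecture + weights

# Later, even in a fresh session, the model loads back ready to predict
restored = tf.keras.models.load_model('flower_classifier_model.h5')
```

Note that `class_indices` lives on the generator, not in the `.h5` file, so the class-name list must be saved or re-derived separately when reloading.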
```python
plt.plot(history.history['accuracy'], label='Training')
plt.plot(history.history['val_accuracy'], label='Validation')
```

What happens:
- Training curves: Shows how accuracy improved over epochs
- Validation tracking: Ensures model isn't overfitting
- Loss curves: Shows how error decreased during training
```python
def select_and_predict_image():
    root = tk.Tk()
    root.withdraw()
    img_path = filedialog.askopenfilename(...)
```

What happens:
- Tkinter setup: Creates hidden window for file dialog
- File dialog: Opens native OS file picker
- Image loading: Loads and preprocesses selected image
- Prediction: Runs image through trained CNN
- Visualization: Shows image + confidence bars side-by-side
```python
select_and_predict_image()
```

What happens:
- File dialog opens: User selects flower image
- Image preprocessing: Resize, normalize, add batch dimension
- CNN prediction: Forward pass through trained network
- Results display: Image + probability bars + console output
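Putting those pieces together, a fuller hypothetical sketch of the testing cell might look like this (here `model` and `class_names` are passed in explicitly rather than read from notebook globals, and the dialog title and file filters are illustrative):

```python
import numpy as np
import tkinter as tk
from tkinter import filedialog
from tensorflow.keras.preprocessing import image

def select_and_predict_image(model, class_names):
    """Open a file dialog, classify the chosen image, print confidences."""
    root = tk.Tk()
    root.withdraw()                       # hide the empty root window
    img_path = filedialog.askopenfilename(
        title='Select a flower image',
        filetypes=[('Images', '*.jpg *.jpeg *.png')])
    root.destroy()
    if not img_path:                      # user cancelled the dialog
        return None

    # Same preprocessing as training: resize, scale to 0-1, add batch axis
    img = image.load_img(img_path, target_size=(64, 64))
    arr = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

    probs = model.predict(arr)[0]         # one probability per class
    for name, p in zip(class_names, probs):
        print(f'{name}: {p:.1%}')
    return class_names[int(np.argmax(probs))]
```

`root.withdraw()` is the trick that makes only the OS file picker appear, without a stray blank Tk window behind it.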
**TensorFlow**
- Purpose: Deep learning framework
- Why chosen: Industry standard, excellent documentation
- Key features: GPU acceleration, automatic differentiation
- In our project: Builds and trains CNN model
**ImageDataGenerator (Keras)**
- Purpose: Efficient image loading and preprocessing
- Why chosen: Handles large datasets without memory issues
- Key features: Automatic resizing, normalization, augmentation
- In our project: Loads flower images in batches
**Tkinter**
- Purpose: Native file selection interface
- Why chosen: Cross-platform, built into Python
- Key features: OS-native appearance, file type filtering
- In our project: Replaces Google Colab file upload
**Matplotlib**
- Purpose: Display images and graphs
- Why chosen: Integrates well with Jupyter notebooks
- Key features: Subplots, customizable styling
- In our project: Shows prediction results and training curves
**NumPy**
- Purpose: Efficient numerical operations
- Why chosen: Foundation for all ML libraries
- Key features: Fast array operations, broadcasting
- In our project: Image data manipulation and processing
- Deep Learning: How CNNs work for image classification
- Computer Vision: Image preprocessing and feature extraction
- Software Engineering: Clean code, error handling, user interfaces
- Data Science: Train/validation splits, performance metrics
- Python Programming: Libraries integration, file handling
- Building neural networks from scratch
- Data preprocessing and visualization
- Creating interactive user interfaces
- Model training and evaluation
- Debugging and troubleshooting
- Agriculture: Automated plant disease detection
- Botany: Species identification for research
- Mobile Apps: Plant identification applications
- E-commerce: Automatic product categorization
- Education: Interactive learning tools
- Add more flower classes
- Implement data augmentation
- Try transfer learning with pre-trained models
- Deploy as a web application
- Add real-time camera classification
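Data augmentation, the second improvement above, fits naturally into the existing pipeline: the training `ImageDataGenerator` just gains a few random-transform options (the parameter values below are illustrative, not tuned):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Each epoch sees randomly transformed copies of the training images,
# which helps a small (150-image) dataset generalize better
augmented_datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    rotation_range=20,       # random rotations up to +/- 20 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # mirror images left-right
    zoom_range=0.1,          # random zoom in/out
)
```

Augmentation applies only to training batches; validation images should still be just rescaled, so the reported validation accuracy stays comparable.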
The same version of the code for Google Colab is available here: https://colab.research.google.com/drive/16VEieLiiel4eTJ2I8aeueMUnGI5EmoF2?usp=sharing
Happy Flower Classification! 🌺🤖
This project demonstrates the complete journey from a broken Colab notebook to a production-ready flower classification system with clean code, proper error handling, and a user-friendly interface.