Prueba diego #3
base: main
Conversation
Pull Request Overview
This PR adds core configuration, model definition, evaluation logic, and dataset listings for wildfire scar segmentation experiments.
- Introduces `parameters.py` for dataset paths and statistical parameters used in training/evaluation
- Implements a U-Net in `model_u_net.py` with an exported model instance
- Adds `evaluation.py` to compute segmentation metrics and generate diagnostic plots
- Includes CSV files listing validation and test samples
Reviewed Changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| parameters.py | Defines statistical arrays and file paths for training and evaluation |
| model_u_net.py | Adds U-Net architecture and exports a prebuilt model instance |
| evaluation.py | Implements evaluation pipeline with metric computation and plotting |
| datasets_csv_11_2023/val_val_196.csv | Provides 196 validation samples |
| datasets_csv_11_2023/val_test_97.csv | Provides 97 test samples |
Comments suppressed due to low confidence (4)
`parameters.py:48`
- Avoid hard-coding absolute, OS-specific paths; consider making this configurable or relative to the project root.

```python
photo_results_path = "C:/Users/56965/Documents/TesisIan/.../evaluation_results/"
```
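A minimal sketch of one way to make this path configurable, resolving the results directory relative to the project root with an environment-variable override. The `WILDFIRE_RESULTS_DIR` variable name and the `evaluation_results` default are illustrative, not part of the PR:

```python
import os
from pathlib import Path

# Resolve the project root from this file's location (fall back to the
# current working directory when __file__ is unavailable, e.g. in a REPL).
PROJECT_ROOT = Path(__file__).resolve().parent if "__file__" in globals() else Path.cwd()

# An environment variable overrides the default location under the project root.
photo_results_path = Path(
    os.environ.get("WILDFIRE_RESULTS_DIR", PROJECT_ROOT / "evaluation_results")
)
```

This keeps the code portable across machines and operating systems while still letting a user redirect output without editing source.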
`model_u_net.py:135`
- [nitpick] Instantiating a model at import time can be surprising; consider exposing the class and letting callers instantiate it in their own context.

```python
model = UNet(n_channels=16, n_classes=1)
```
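One hedged way to apply this suggestion, sketched with a stand-in class (the real `UNet` lives in `model_u_net.py`; the `build_model` factory name is illustrative, not from the PR):

```python
class UNet:
    """Stand-in skeleton for the PR's U-Net, used only to show the pattern."""
    def __init__(self, n_channels=16, n_classes=1):
        self.n_channels = n_channels
        self.n_classes = n_classes

def build_model(n_channels=16, n_classes=1):
    """Factory: callers construct the model in their own context, so merely
    importing the module no longer triggers (potentially expensive) instantiation."""
    return UNet(n_channels=n_channels, n_classes=n_classes)

# Caller side: instantiate exactly when and where the model is needed.
model = build_model(n_channels=16, n_classes=1)
```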
`evaluation.py:23`
- [nitpick] There are no tests covering the evaluation logic; consider adding unit tests for `obtain_model_size` and the main evaluation flow.

```python
def main(argv=None):
```
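A sketch of what such unit tests might look like for `obtain_model_size`, with the function body copied from the PR so the example is self-contained (the checkpoint filenames in the tests are hypothetical):

```python
import re

def obtain_model_size(input_str):
    # Copied from the PR's evaluation.py for a self-contained test sketch.
    patron_128 = re.compile(r'_\d+_(\d+)_')
    patron_as = re.compile(r'_(as)_')
    coincidencia_128 = patron_128.search(input_str)
    coincidencia_as = patron_as.search(input_str)
    valor_128 = coincidencia_128.group(1) if coincidencia_128 else None
    valor_as = coincidencia_as.group(1) if coincidencia_as else None
    return valor_128, valor_as

def test_detects_128_model():
    assert obtain_model_size("unet_5_128_epoch30")[0] == "128"

def test_detects_as_model():
    assert obtain_model_size("unet_as_epoch30")[1] == "as"

def test_no_match_returns_none():
    assert obtain_model_size("unet") == (None, None)
```

Run with `pytest`; the third case documents the fallback behavior when neither pattern matches.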
`parameters.py:1`
- [nitpick] Add a module-level docstring explaining the purpose of these parameters and how they should be used.

```python
# ---
```
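A sketch of what such a module docstring might say; the description of the arrays is inferred from the snippet below and should be corrected by the author if inaccurate:

```python
"""Dataset paths and statistical parameters for wildfire scar segmentation.

The min_as/max_as arrays appear to hold per-band bounds for the 16 input
channels, and the *_path variables point at datasets, model checkpoints,
and evaluation outputs used by the training and evaluation scripts.
"""
```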
```python
'''
min_as=[0.0, 0.0, 0.0, 17.0, 7.0, 0.0, -0.0961538, -597.96850,
        0.0, 0.0, 0.0, 0.0, 8.0, 0.0, -0.0966292, -392.023010]
max_as=[1689.0, 2502.0, 3260.0, 5650.0, 5282.0, 4121.0, 1.0, 1000.0, 1750.0,
        2559.0, 3325.0, 6065.0, 5224.0, 3903.0, 1.0, 1000.0]
'''
```
Copilot AI · Jun 9, 2025
There are large blocks of commented-out code inside unassigned triple quotes; remove or move these to real comments to reduce confusion.
Suggested change:

```diff
-'''
-min_as=[0.0, 0.0, 0.0, 17.0, 7.0, 0.0, -0.0961538, -597.96850,
-        0.0, 0.0, 0.0, 0.0, 8.0, 0.0, -0.0966292, -392.023010]
-max_as=[1689.0, 2502.0, 3260.0, 5650.0, 5282.0, 4121.0, 1.0, 1000.0, 1750.0,
-        2559.0, 3325.0, 6065.0, 5224.0, 3903.0, 1.0, 1000.0]
-'''
+# min_as=[0.0, 0.0, 0.0, 17.0, 7.0, 0.0, -0.0961538, -597.96850,
+#         0.0, 0.0, 0.0, 0.0, 8.0, 0.0, -0.0966292, -392.023010]
+# max_as=[1689.0, 2502.0, 3260.0, 5650.0, 5282.0, 4121.0, 1.0, 1000.0, 1750.0,
+#         2559.0, 3325.0, 6065.0, 5224.0, 3903.0, 1.0, 1000.0]
```
```python
import gc
```
Copilot AI · Jun 9, 2025
The `gc` import is unused; remove it to clean up dependencies.

Suggested change:

```diff
-import gc
```
```python
x, y = batch['img'].float().to(device), batch['fpt'].float().to(device)
idx = batch['idx']

output = model(x).cpu()
```
Copilot AI · Jun 9, 2025
Wrap inference in `with torch.no_grad():` to avoid building the computation graph and reduce memory usage during evaluation.
Suggested change:

```diff
-output = model(x).cpu()
+with torch.no_grad():
+    output = model(x).cpu()
```
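For context, a minimal runnable sketch of the suggested pattern; the `Conv2d` stand-in and input shape are illustrative, not the PR's model:

```python
import torch

model = torch.nn.Conv2d(16, 1, kernel_size=1)  # stand-in for the PR's 16-channel U-Net
model.eval()  # also freezes dropout/batch-norm behavior during evaluation

x = torch.randn(1, 16, 8, 8)
with torch.no_grad():        # no autograd graph is built -> lower memory use
    output = model(x).cpu()

print(output.requires_grad)  # → False: tensors made under no_grad track no gradients
```

Pairing `model.eval()` with `torch.no_grad()` is the conventional evaluation setup; the former changes layer behavior, the latter disables graph construction.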
```python
def main(argv=None):
    def obtain_model_size(input_str):
        # Define the search patterns for '128' and 'as' at the specific positions
        patron_128 = re.compile(r'_\d+_(\d+)_')
        patron_as = re.compile(r'_(as)_')
        # Search for the patterns in the string
        coincidencia_128 = patron_128.search(input_str)
        coincidencia_as = patron_as.search(input_str)
        # Assign the values to the variables according to the matches
        valor_128 = coincidencia_128.group(1) if coincidencia_128 else None
        valor_as = coincidencia_as.group(1) if coincidencia_as else None
        return valor_128, valor_as

    if argv is None:
        argv = sys.argv[1:]
    args = get_evaluation_args(argv)

    evald1 = evald2 = dataset = pd.DataFrame()
    print(f'ev1: {args.ev1}, ev2: {args.ev2}, mp: {args.mp}')

    evald1 = pd.read_csv(args.ev1)
    evald2 = pd.read_csv(args.ev2)
    dataset = pd.concat([evald1, evald2], axis=0, ignore_index=True)

    model_size = args.ms.upper()
    if not model_size:
        if obtain_model_size(args.mp)[0] == "128":
            model_size = "128"
        elif obtain_model_size(args.mp)[1] == "as":
            model_size = "AS"

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    # Loads a model of a specific epoch to evaluate
    model.load_state_dict(torch.load(args.mp, map_location=torch.device('cpu')))

    def evaluation(model_size):
        """
        Evaluates the metrics of the given dataset and plots images for each input comparing
        the pre- and post-fire images and the original fire scar vs the model's prediction.

        dataset (object): Pandas dataframe with the data's filenames from two different regions.
            There are 3 columns with the required data filenames for each input. "ImPosF": the
            post-fire image, "ImgPreF": the pre-fire image, and "FireScar_tif": the label, in a
            raster file.
        model_size (str): "AS" or "128", set depending on the desired dataset, AS or 128.
        """
        # Adjust the following parameters in the parameters file:
        # ssx_ey: where x = index number and y: either a for the AS model or 1 for the 128 model
        # ss1_ey (int): index of the first input from Dataset 1: Region of Valparaiso
        # ss2_ey (int): index of the last input from Dataset 1: Region of Valparaiso
```
Copilot AI · Jun 9, 2025
[nitpick] This function is very large and nested inside main; consider breaking it into smaller, top-level functions for clarity and testability.
Suggested change:

```python
def evaluation(dataset, model_size, device, model):
    """
    Evaluates the metrics of the given dataset and plots images for each input comparing
    the pre- and post-fire images and the original fire scar vs the model's prediction.

    dataset (object): Pandas dataframe with the data's filenames from two different regions.
        There are 3 columns with the required data filenames for each input. "ImPosF": the
        post-fire image, "ImgPreF": the pre-fire image, and "FireScar_tif": the label, in a
        raster file.
    model_size (str): "AS" or "128", set depending on the desired dataset, AS or 128.
    device (torch.device): The device to run the model on.
    model (torch.nn.Module): The trained model to evaluate.
    """
    # Adjust the following parameters in the parameters file:
    # ssx_ey: where x = index number and y: either a for the AS model or 1 for the 128 model
    # ss1_ey (int): index of the first input from Dataset 1: Region of Valparaiso
    # ss2_ey (int): index of the last input from Dataset 1: Region of Valparaiso
```
```python
OE=FN_eval[cont]/(TP_eval[cont]+FN_eval[cont])
this_iou = jaccard_score(y[0].flatten().cpu().detach().numpy(),
                         pred[0][0].flatten())
test_df.loc[i,"iou"]=this_iou
```
Copilot AI · Jun 9, 2025
The code mixes `i` and a separate `cont` counter for indexing `test_df`; unify on a single index variable to avoid confusion.
Suggested change:

```diff
-test_df.loc[i,"iou"]=this_iou
+test_df.loc[cont,"iou"]=this_iou
```
No description provided.