
AssertionError? #1

Open
Isak-Andersson opened this issue Apr 1, 2021 · 5 comments

Comments

@Isak-Andersson

Hi. I tried to use your program but I got this error. How can I fix it? Thanks!

    python C:\Program1\DeepImageAnalogy-master\main.py -g 0 --A_PATH Hellim1.png --BP_PATH HXMGA1.png

    Traceback (most recent call last):
      File "C:\Program1\DeepImageAnalogy-master\main.py", line 50, in <module>
        assert torch.cuda.is_available()
    AssertionError

@Kexiii
Owner

Kexiii commented Apr 1, 2021

A GPU is required to run the code, because a VGG net is used to extract image features. You can comment out this line of code, but it will run rather slowly on the CPU.
If you do have a GPU device, you should check your PyTorch GPU environment.
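A minimal way to check the PyTorch GPU environment mentioned above (all standard `torch` calls) is:

```python
import torch

# Basic sanity checks for a PyTorch GPU setup.
print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version this build targets; None for CPU-only builds
print(torch.cuda.is_available())   # True only if driver, CUDA-enabled build, and GPU all line up
if torch.cuda.is_available():
    print(torch.cuda.device_count())
    print(torch.cuda.get_device_name(0))
```

If `torch.version.cuda` prints `None`, the installed wheel is CPU-only and needs to be replaced with a CUDA build.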

@Isak-Andersson
Author

> A GPU is required to run the code, because a VGG net is used to extract image features. You can comment out this line of code, but it will run rather slowly on the CPU.
> If you do have a GPU device, you should check your PyTorch GPU environment.

I have an NVIDIA GeForce RTX 2060 Super (8 GB VRAM). Is this enough? I don't know how to check my "PyTorch GPU environment". Could you give me a link or an explanation of how to do that? Thanks.

@Kexiii
Owner

Kexiii commented Apr 1, 2021

> I have an NVIDIA GeForce RTX 2060 Super (8 GB VRAM). Is this enough? I don't know how to check my "PyTorch GPU environment". Could you give me a link or an explanation of how to do that? Thanks.

8 GB of VRAM is quite enough.
Just follow the official guide to install PyTorch and the CUDA toolkit. I recommend using Anaconda as your Python package manager. Hope this helps!
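As a sketch of what the official-guide install looked like around the time of this issue (the exact command and CUDA version depend on your setup, so check pytorch.org rather than copying this verbatim):

```shell
# Assumed example only -- pick the command pytorch.org generates for your CUDA version.
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

# Quick verification afterwards; should print "True" on a working GPU setup.
python -c "import torch; print(torch.cuda.is_available())"
```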

@Isak-Andersson
Author

> Just follow the official guide to install PyTorch and the CUDA toolkit. I recommend using Anaconda as your Python package manager. Hope this helps!

Right. I think I figured it out. Now I'm getting some other errors, though. Any idea what the problem could be?

    (base) C:\Program1\DeepImageAnalogy-master>python main.py -g 0 --A_PATH Hellim1_41x64.png --BP_PATH HXMGA1.png
    ====================CONFIG====================
    {'alpha_2': [1.0, 0.8, 0.7, 0.6, 0.1, 0.0], 'alpha': [1.0, 0.9, 0.8, 0.7, 0.2, 0.0], 'nnf_patch_size': [3, 3, 3, 5, 5, 3], 'radii': [32, 6, 6, 4, 4, 2], 'lr': [1, 0.005, 0.005, 5e-05], 'resize_ratio': 0.5, 'params': {'layers': [29, 20, 11, 6, 1], 'iter': 10}}
    ====================CONFIG====================
    Output Image Shape: (20, 32, 3)
    Output Image Shape: (20, 32, 3)
    ====================Deep Image Analogy Alogrithm Start====================
    Downloading: "https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg19-d01eb7cb.pth" to C:\Users\Isak/.cache\torch\hub\checkpoints\vgg19-d01eb7cb.pth
    Traceback (most recent call last):
      File "main.py", line 51, in <module>
        main()
      File "main.py", line 42, in main
        img_AP,img_B = deep_image_analogy(A=img_A,BP=img_BP,config=config)
      File "C:\Program1\DeepImageAnalogy-master\deep_image_analogy.py", line 23, in deep_image_analogy
        model = VGG19()
      File "C:\Program1\DeepImageAnalogy-master\VGG19.py", line 30, in __init__
        vgg19_model.load_state_dict(model_zoo.load_url(pretrained_weights), strict=False)
      File "C:\Program1\anaconda3\lib\site-packages\torch\hub.py", line 524, in load_state_dict_from_url
        download_url_to_file(url, cached_file, hash_prefix, progress=progress)
      File "C:\Program1\anaconda3\lib\site-packages\torch\hub.py", line 394, in download_url_to_file
        u = urlopen(req)
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 531, in open
        response = meth(req, response)
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 640, in http_response
        response = self.parent.error(
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 569, in error
        return self._call_chain(*args)
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 502, in _call_chain
        result = func(*args)
      File "C:\Program1\anaconda3\lib\urllib\request.py", line 649, in http_error_default
        raise HTTPError(req.full_url, code, msg, hdrs, fp)
    urllib.error.HTTPError: HTTP Error 403: Forbidden

@Kexiii
Owner

Kexiii commented Apr 2, 2021

> Right. I think I figured it out. Now I'm getting some other errors, though. Any idea what the problem could be?
>
> urllib.error.HTTPError: HTTP Error 403: Forbidden

Since the code was last tested about 3 years ago, it seems the pretrained weights provided by jcjohnson are no longer available. You can try to download the weights from here, or update the code in VGG19.py.

From:

class VGG19:
    def __init__(self):

        pretrained_weights = "https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg19-d01eb7cb.pth"
        vgg19_model = models.vgg19(pretrained=False)
        vgg19_model.load_state_dict(model_zoo.load_url(pretrained_weights), strict=False)
        self.vgg19_features = vgg19_model.features
        self.model = FeatureExtractor()  # the new Feature extractor module network

To:

class VGG19:
    def __init__(self):

        vgg19_model = models.vgg19(pretrained=True)
        self.vgg19_features = vgg19_model.features
        self.model = FeatureExtractor()  # the new Feature extractor module network

Let me know if either works; sorry for the inconvenience.
