deal with stale deployment references better #1590
"stale path" in what regards? How does cleaning space crash your update? Is this related to the issue that you just reopened #1589 |
Yes, in a certain way. Here, an app needs to be uploaded and pointed to a path on a host in order to be deployed. Instead of looking in the admin server's stage folder for the deployed app's files, the tool searches the path where the file was originally uploaded. It then crashes, because I need to clean up space and the upload folder had been cleared to keep disk consumption in check. Maybe I need to review some of my deployment rules here, and I did, but it was tricky to understand that the update script was crashing because of that. There was no message linking the failure to the missing file, just a note telling me to upload the logs and open an issue on GitHub. The messages could be more informative: when the tool checks all deployments to create a hash and those files are no longer there, it crashes. The functionality itself is fine, but the logs don't point this out on stdout. Even after updating my file-management workflow this will be a problem in the future, because old files must be cleaned up to release space, and every time an old file is removed I have to copy it from stage back to its old path just to avoid breaking that deployment. If I automate app deployments, it is not good that deploying a single app requires checking all of them and crashes on an unrelated one. In summary: could the logs be more informative about stale paths causing crashes?
Hello alansenairj - I have a few questions about your issue.
It would be helpful if you could provide us with the full logs from the deployApps tool, and any relevant information from the WDT model being deployed. Thanks.
Hi rakillen, of course. Let me address these doubts and try to explain as best I can.
This issue occurred because the stdout messages do not flag the absence of deployment files. I think the tool needs to create a hash and searches for the files there, but the error message is not capable of conveying the "missing deployment file" information; the log just directs me to open an issue on GitHub, as the message instructs. That is the main reason I opened this issue in the first place. Maybe adding a phrase to the logs like "check that all deployment files exist at their deployment paths" would be very helpful for debugging.
I am not running the deployApps script anymore because of its deprecation message; I am using the update script instead. I run it from an Ansible playbook and it is working well, deploying in online mode with ".war" application files.
My playbook has some extra steps because a layer-7 firewall blocks file uploads from the console; the console upload button does not work because of that, so I have to get the files onto the WebLogic server myself. We deploy apps from the WebLogic server's file system, not from remote files. The playbook creates a backup of the old deployment, then creates a new folder and copies the file to /opt/deploy/ on the admin server's filesystem. This runs on a single-machine dev cluster environment; we will validate it on a two-machine HML cluster later, before putting it into production.
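The backup-and-copy steps described above can be sketched in shell as follows. This is not the actual playbook: the function name, staging location, and directory layout are assumptions for illustration only.

```shell
#!/bin/sh
# Hypothetical sketch of the playbook's deploy step:
# back up the currently deployed .war, then copy the new one into place.
# All names and paths below are placeholders, not from the real playbook.

deploy_app() {
  src="$1"         # staged .war already uploaded to the server filesystem
  deploy_dir="$2"  # e.g. /opt/deploy on the admin server
  backup_dir="$3"  # where old copies are kept before being overwritten
  app=$(basename "$src")

  # 1. Back up the currently deployed .war, if one exists.
  if [ -f "$deploy_dir/$app" ]; then
    mkdir -p "$backup_dir"
    cp "$deploy_dir/$app" "$backup_dir/$app.$(date +%Y%m%d%H%M%S)"
  fi

  # 2. Ensure the deployment folder exists, then copy the new artifact in.
  mkdir -p "$deploy_dir"
  cp "$src" "$deploy_dir/$app"
}
```

In the real setup an Ansible playbook would drive these steps (e.g. with the `copy` module) before invoking the WDT update script.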
It is working very well.
I have a find/rm-style script in crontab to keep the filesystem clean; I automated this to avoid low-disk-space warnings. So backups are cleaned up, but deployments are not. I understand the WDT process better now, but the first time I used WDT it was tricky to understand why it kept breaking: I had to put each stale ".war" file back, and only once every stale path stopped complaining about a missing file in the logs did the WDT domain verification pass and everything work. I know how to handle this situation now, since we are implementing this automation playbook for the first time. It is working really well, and I think we need to improve our rules for managing our WebLogic servers to keep this playbook running. I don't know; any advice is very welcome. The problem was that a dependency on files unrelated to the deployment itself was causing crashes. It crashes because it checks every deployed app file in the domain, whether or not that app is running on the server JVM, and I think WDT could be smarter about missing files, if that is possible. If I am deploying only one app, why does it need to break on all of them? Maybe I could ignore the other deployments, be more straightforward, and check only the file I am deploying now. That might work better for automation.
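The kind of crontab cleanup described above could look roughly like this. The helper name, backup location, and retention period are hypothetical; the point is that the cleanup prunes only timestamped backup copies and never touches the live deployment paths that the WDT model references.

```shell
#!/bin/sh
# Hypothetical sketch of a cron-driven cleanup: delete backup .war copies
# older than a given number of days. Invoked from crontab, e.g.:
#   0 3 * * * /usr/local/bin/cleanup_backups.sh
# Paths and the retention window are placeholders.

cleanup_backups() {
  backup_dir="$1"  # e.g. /opt/deploy_backup -- NOT the live /opt/deploy path
  days="$2"        # retention in days

  # Only match timestamped backup copies (app.war.<stamp>), so files that
  # the WDT model still points at are never removed.
  find "$backup_dir" -name '*.war.*' -mtime +"$days" -delete
}
```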
There are 7 apps in total. One of them is stopped and no longer in use, but we need to keep it there. That is not WDT's fault; it is our own admin management.
As I said, I am using updateApps. I can send you more logs, of course, but I have now fixed all the stale-path issues and have had no errors this week. I have done about 10 deployments since my last reply and it is working fine. Maybe a way to ignore errors from stale paths would make WDT smarter for automated tooling and deployments, I don't know; it is just a humble user suggestion. This tool is fantastic, by the way. It gave us the ability to get rid of manual console deployments.
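The workaround described in this thread, putting each stale ".war" back in place before running the WDT update, can be sketched as follows. The function name, stage directory, and the way the expected paths are supplied are all assumptions; the real model paths would come from the WDT model file.

```shell
#!/bin/sh
# Hypothetical sketch of the workaround: before running the WDT update,
# restore any .war that the model references but that the cleanup job
# has removed, copying it back from a stage directory of pristine copies.
# Names and paths are placeholders.

restore_stale() {
  stage_dir="$1"  # directory holding a pristine copy of every deployed .war
  shift
  for path in "$@"; do  # the source paths the WDT model expects to exist
    if [ ! -f "$path" ]; then
      mkdir -p "$(dirname "$path")"
      cp "$stage_dir/$(basename "$path")" "$path"
    fi
  done
}
```

Running something like this as a pre-task in the playbook would keep the update script from crashing on an app that is unrelated to the one being deployed.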
Hello alansenairj - Thanks for the information. It sounds like your problem has been resolved by adding steps to your playbook to restore the application .war files before running WDT updateDomain. Is there something remaining that needs to be fixed in WDT?
No, thanks. I will close this issue.
Is there some way to deal with stale paths from apps? If I need to clean up some space, it crashes my update.