Model for a mixture of two homopolymers (case 0 in rpa model) #689
base: master
Conversation
…or PSD/PVME mixture described by Hammouda
|
You don't need both a C and a Python implementation of the model with different names in the standard model set. I suggest keeping the Python model, but updating it to use a vectorized Debye helper function to preserve precision. Code (untested):

```python
import numpy as np

def debye(X):
    # Taylor expansion around zero:
    # 2*(e^-X - 1 + X)/X^2 ≈ 1 - X/3 + X^2/12 - X^3/60 + ...
    taylor_index = X < 1e-3
    result = np.empty_like(X, dtype='d')
    result[taylor_index] = np.polyval([-1/60, 1/12, -1/3, 1], X[taylor_index])
    Xp = X[~taylor_index]
    result[~taylor_index] = 2.0 * (np.expm1(-Xp) + Xp) / (Xp * Xp)
    return result
```

If we don't mind adding a sasmodels dependency then we could code debye in numba (untested):
```python
import numba
import numpy as np

@numba.vectorize(['float64(float64)'], target='parallel')
def debye(X):
    # Taylor expansion around zero:
    # 2*(e^-X - 1 + X)/X^2 ≈ 1 - X/3 + X^2/12 - X^3/60 + ...
    if X < 1e-3:
        return 1. + X*(-1./3. + X*(1./12. + X*(-1./60.)))
    else:
        return 2.0 * (np.expm1(-X) + X) / (X * X)
```

Training users on numba will be easier than training them to vectorize conditional expressions. And the numba version will probably be faster. |
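To illustrate the precision issue the helper addresses: evaluating 2*(e^-X - 1 + X)/X² directly suffers catastrophic cancellation for small X, while the Taylor/expm1 split stays accurate. A minimal comparison (the `debye_naive` name is mine, used only for this demonstration):

```python
import numpy as np

def debye_naive(X):
    # Direct formula: the subtraction e^-X - 1 + X cancels badly for small X.
    return 2.0 * (np.exp(-X) - 1.0 + X) / (X * X)

def debye(X):
    # Stable evaluation: Taylor series below 1e-3, expm1-based formula above.
    X = np.asarray(X, dtype='d')
    result = np.empty_like(X)
    small = X < 1e-3
    result[small] = np.polyval([-1/60, 1/12, -1/3, 1], X[small])
    Xp = X[~small]
    result[~small] = 2.0 * (np.expm1(-Xp) + Xp) / (Xp * Xp)
    return result

x = np.array([1e-8, 1e-3, 1.0, 10.0])
print(debye(x))        # smoothly approaches 1 as X -> 0
print(debye_naive(x))  # first entry is dominated by rounding noise
```

At X = 1e-8 the stable version returns essentially 1 (the exact limit), while the naive version divides an O(1e-16) rounding residual by X², producing an O(1) error.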
|
That's fine with me. Then the two points to decide on are:
|
|
I've done a functionality test of rpa_binary_mixture_homopolymers.py using v5.0.6 (because of the broken plugin capability in v6.x) on W11/x64. LGTM.

In terms of parameterisation, I think there is an argument for both use cases; i.e., those I use in binary_blend.py and those @gonzalezma uses here. The parameters I used are, I would argue, more meaningful to an experimentalist: you can look up the mass density on the sample bottle or in the Polymer Handbook, you know the monomer Mw and can measure the total Mw (which gives you the number of repeats), you can measure an Rg, etc. The parameters in the original RPA model, reimplemented here, are one step removed for the experimentalist, but arguably more accessible to modellers. Also, an Rg measured experimentally inherently includes the coil expansion factor (due to solvency), whereas the parameterisation here is a little more 'theoretical', if I can put it like that. However, having said that, I realise this potentially proliferates the number of models to be written! 😄

On the question of implementing all the original RPA models individually, this would at least give users full access to all those models in what I suspect would be a shorter time than fixing the GUI selection issue? |
smk78 left a comment
Happy to approve this, though it requires the plugin functionality in v6.x to be fixed to be of use.
|
Without looking that closely yet, my usual first question: how will this affect people using sasmodels scripts? In other words, will the scripts no longer work with this version of sasmodels? If not, is that a problem?

I think we do not want to have multiple parameterizations of the same model in the sasmodels distribution, mainly because that is a slippery slope. That said, we may eventually have to? Meanwhile, I guess my question would be whether both parameterizations are equally general? There is also the question of the original derivation in the literature, I suppose, as well as which may be more commonly used? This does speak to establishing a "sasmodels" working group to vet these kinds of things and make recommendations?

Finally, vectorizing seems de rigueur. I'm not sure if this is the time to open up the numba discussion or not, though? We do already have a numba dependency in sasview (sascalc), so it would make some sense? But we really need to fix the problem of loading numba, which makes the first call to pieces that use it look like the application has crashed. |
This PR is related to SasView issues #2243 and #3695 but it does not address them. It addresses only the comment about the RPA model in #2243 saying that "As far as the rpa model is concerned, the consensus at the 2024 Contributor Camp roadmap discussion was that this model should not be re-implemented in the form it once was (ie, using a control variable to select the calculation case). Instead, each case should be implemented as a separate model".
The PR introduces two versions of the RPA model for the first case (binary mixture of homopolymers): one in C that keeps the same input parameters as the rpa model, and a Python version that Steve uploaded to the marketplace, which uses alternative parameters (mass density and molecular weight instead of molar volume, radius of gyration instead of monomer length, and scattering length densities instead of scattering lengths), so I think it is interesting to keep both.
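For the record, the two parameter sets are related by standard textbook relations. A sketch of the mapping (my own function names and example values, not code from either model; the Rg relation assumes an ideal chain, so it omits the coil expansion factor mentioned above):

```python
def molar_volume(mw, density):
    """Molar volume in cm^3/mol from molecular weight (g/mol) and mass density (g/cm^3)."""
    return mw / density

def n_repeats(mw_total, mw_monomer):
    """Degree of polymerization from total and monomer molecular weights."""
    return mw_total / mw_monomer

def rg_ideal(n, b):
    """Ideal-chain (theta-condition) radius of gyration: Rg^2 = n*b^2 / 6."""
    return b * (n / 6.0) ** 0.5

# Example: a hypothetical polymer with monomer Mw 100 g/mol, total Mw 1e5 g/mol,
# density 1.0 g/cm^3, and statistical segment length 6.7 A.
n = n_repeats(1e5, 100.0)   # 1000 repeats
v = molar_volume(1e5, 1.0)  # 1e5 cm^3/mol
rg = rg_ideal(n, 6.7)       # ~86.5 A
```

An experimentally measured Rg includes solvency effects, so in general it will differ from `rg_ideal`; this is the gap between the two parameterizations.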
Both models will appear in a new category named Polymers.
The advantage of splitting the rpa cases is that for each case we can have simpler code and give the user better documentation for the corresponding case, with some physical defaults and examples.
The main disadvantage is the multiplication of models having a lot of common code.
If we still agree with the consensus of the 2024 Contributor Camp, I will go ahead with rewriting the other cases (option 1). If, on the other hand, we decide to keep the single rpa model covering all the cases (option 2), then we need to fix #2243 (as at present the current UI makes it very hard to use) and also check the code, as I did not get the same result as with the models committed in this PR (which agree with each other). My own vote would go for option 1.
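For reference, the quantity the two committed implementations should agree on is the de Gennes RPA intensity for an incompressible binary homopolymer blend (case 0 in Hammouda's notes). A schematic sketch, with my own function and parameter names and no claim about either model's exact normalization or units:

```python
import math

def debye(x):
    # Debye form factor P(q) with x = (q*Rg)^2, stable for small arguments.
    if x < 1e-3:
        return 1. + x*(-1./3. + x*(1./12. + x*(-1./60.)))
    return 2.0 * (math.expm1(-x) + x) / (x * x)

def rpa_case0(q, phi_a, n_a, v_a, rg_a, phi_b, n_b, v_b, rg_b,
              sld_a, sld_b, chi, v0):
    """Schematic RPA intensity for a binary homopolymer blend:
        1/S(q) = 1/(phi_a n_a v_a P_a) + 1/(phi_b n_b v_b P_b) - 2 chi / v0
        I(q)   = (sld_a - sld_b)^2 * S(q)
    Units and prefactors are illustrative only.
    """
    pa = debye((q * rg_a) ** 2)
    pb = debye((q * rg_b) ** 2)
    s_inv = (1.0 / (phi_a * n_a * v_a * pa)
             + 1.0 / (phi_b * n_b * v_b * pb)
             - 2.0 * chi / v0)
    return (sld_a - sld_b) ** 2 / s_inv

# Below the spinodal, a positive chi raises the low-q intensity.
args = dict(q=0.01, phi_a=0.5, n_a=1000, v_a=100.0, rg_a=50.0,
            phi_b=0.5, n_b=1000, v_b=100.0, rg_b=50.0,
            sld_a=1.0, sld_b=6.0, v0=100.0)
i0 = rpa_case0(chi=0.0, **args)
i1 = rpa_case0(chi=1e-4, **args)
assert i1 > i0 > 0
```

Evaluating each rewritten case against a shared reference expression like this would also make it easy to check that the per-case models stay consistent with each other.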