Understanding SoftclipTransform #10
Description

I'm trying to limit the output domain of a normalizing flow. I noticed that there is a `SoftclipTransform` in `zuko.transforms`.

Reproduce

This is what I have tried:

```python
import torch
import zuko

class MyFlow(zuko.flows.FlowModule):
    def __init__(
        self,
        features: int,
        context: int = 0,
        transforms: int = 3,
        randperm: bool = False,
        **kwargs,
    ):
        orders = [
            torch.arange(features),
            torch.flipud(torch.arange(features)),
        ]

        transforms = [
            zuko.flows.MaskedAutoregressiveTransform(
                features=features,
                context=context,
                order=torch.randperm(features) if randperm else orders[i % 2],
                **kwargs,
            )
            for i in range(transforms)
        ]

        base = zuko.flows.Unconditional(
            zuko.distributions.DiagNormal,
            torch.zeros(features),
            torch.ones(features),
            buffer=True,
        )

        transforms.append(zuko.flows.Unconditional(zuko.transforms.SoftclipTransform))

        super().__init__(transforms, base)
```
Expected behavior

I would expect the samples to be constrained to the [-5, 5] domain, since that is the default argument of `SoftclipTransform`. I have two questions:

```python
import matplotlib.pyplot as plt

flow = MyFlow(1, 1, transforms=5, hidden_features=[50])
samples = flow(torch.rand(1)).sample((10000,))
plt.hist(samples.numpy(), bins=100)
```
Hello @arnauqb 👋
In zuko, `transforms` is the sequence of transformations $f_0, \dots, f_n$ used to compute the likelihood, and sampling applies their inverses in reverse order, so the inverse of the first transformation is applied last. Therefore, if you want your samples to be bounded, the inverse of the first transformation $f_0^{-1}$ should be the `SoftclipTransform`, but, in your snippet, you made the last transformation $f_n$ the `SoftclipTransform`.

The easiest solution is to insert the inverse `SoftclipTransform` at the front of the `transforms` list.

```python
class MyFlow(zuko.flows.MAF):
    def __init__(self, *args, bound: float = 5.0, **kwargs):
        super().__init__(*args, **kwargs)

        self.transforms.insert(0, zuko.flows.Unconditional(self.inverse_softclip, bound=bound))

    def inverse_softclip(self, bound: float = 5.0):
        return zuko.transforms.SoftclipTransform(bound).inv
```

However, I must warn you that this is dangerous. The softclip function squashes the whole real line into the bounded interval, so its inverse takes very large values near the bound:

```python
>>> softclip = zuko.transforms.SoftclipTransform(5.0)
>>> softclip.inv(4.9)
245.00000000000117
>>> softclip.inv(4.99)
2494.9999999999977
```

This would lead to very high input variances during training, for which neural networks often break. To mitigate this issue, if your inputs lie in …
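The ordering argument can also be illustrated without zuko at all. In this toy sketch (the affine layer and all names are made up for illustration), a flow is just a list of (forward, inverse) pairs; sampling applies the inverses in reverse order, so the first transform's inverse runs last and dictates the range of the samples:

```python
BOUND = 5.0

def clip_fwd(x: float) -> float:
    # Inverse-softclip direction, used when evaluating the likelihood.
    # Only defined for |x| < BOUND.
    return x / (1.0 - abs(x / BOUND))

def clip_inv(z: float) -> float:
    # Softclip: squashes any real z into (-BOUND, BOUND).
    return z / (1.0 + abs(z / BOUND))

def affine_fwd(x: float) -> float:
    # A made-up stand-in for a learned transformation.
    return 2.0 * x + 1.0

def affine_inv(z: float) -> float:
    return (z - 1.0) / 2.0

# First transform's inverse is the softclip, as the reply recommends.
transforms = [(clip_fwd, clip_inv), (affine_fwd, affine_inv)]

def sample(z: float) -> float:
    # Sampling applies the inverses in reverse order, so clip_inv runs last.
    for _, inv in reversed(transforms):
        z = inv(z)
    return z

print(sample(100.0))   # inside (-5, 5)
print(sample(-1e6))    # still inside (-5, 5)
```

Placing the softclip anywhere else in the list would let the later inverses map its output back outside the interval, which is why the snippet in the question produced unbounded samples.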