mdlm.noise_mdlm
ContinousTimeNoiseSchedule
Bases: Module, NoiseSchedule
Base class for continuous time noise schedules for absorbing diffusion.
For absorbing diffusion in continuous time, we only need the rate $\dot\sigma(t)$ and its integral $\sigma(t) = \int_0^t \dot\sigma(s)\,ds$, which we call noise_rate and total_noise respectively.
Note
We assume that for continuous time, $t \in [0, 1]$.
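To make the relation between total_noise and noise_rate concrete, here is a minimal scalar sketch using a log-linear schedule, a common choice for absorbing diffusion. The schedule itself and the `EPS` floor are illustrative assumptions, not this class's implementation (which operates on tensors and leaves the schedule abstract):

```python
import math

EPS = 1e-3  # illustrative floor, mirroring eps=0.001 in __init__

def total_noise(t: float) -> float:
    # sigma(t): the cumulative noise at time t, here a log-linear schedule.
    return -math.log1p(-(1.0 - EPS) * t)

def noise_rate(t: float) -> float:
    # d sigma / dt: the derivative of total_noise with respect to t.
    return (1.0 - EPS) / (1.0 - (1.0 - EPS) * t)
```

Note that `total_noise(0.0)` is `0.0` and the noise grows without bound as $t \to 1$, which is what drives every token toward the absorbing state.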
__init__(antithetic_sampling=True, importance_sampling=False, grad=False, eps=0.001)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `antithetic_sampling` | `bool` | If true, the sampled time steps in a batch are sampled around points of a uniform grid over $[0, 1]$, instead of sampling directly from a uniform distribution over $[0, 1]$. | `True` |
| `importance_sampling` | `bool` | The goal is to have a desired distribution over the noise level $\sigma$; sampling $t$ is just a way of obtaining a value of $\sigma$. Since $\sigma(t)$ is a non-linear function of $t$, if we want a desired distribution over $\sigma$ for training, which is indeed the case, we cannot simply sample $t$ uniformly and then transform it to $\sigma(t)$. Setting `importance_sampling=True` samples uniformly over $\sigma$ directly, in the range $[\sigma_{\text{min}}, \sigma_{\text{max}}]$. | `False` |
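The antithetic_sampling behaviour described above can be sketched as stratified sampling: one shared uniform offset spread over a regular grid, so a batch of time steps covers $[0, 1)$ evenly instead of clustering by chance. This is a plain-Python sketch under that assumption; the real method returns a tensor, and the exact offset scheme may differ:

```python
import random

def sample_t_antithetic(batch_size: int) -> list[float]:
    # One shared uniform offset in [0, 1/batch_size).
    u = random.random() / batch_size
    # Each sample lands in its own stratum [i/n, (i+1)/n),
    # so the batch spans [0, 1) evenly.
    return [u + i / batch_size for i in range(batch_size)]
```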
noise_rate(t)
Return the noise level at time t.
total_noise(t)
Return the total noise at time t.
t_from_noise_rate(noise_rate)
Return the time step t from the noise rate $\dot\sigma$.
t_from_total_noise(total_noise)
Return the time step t from the total noise.
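For an invertible schedule, t_from_total_noise is just the algebraic inverse of total_noise, so the round trip recovers $t$. The sketch below uses the same illustrative log-linear schedule as above (an assumption, not necessarily this class's schedule):

```python
import math

EPS = 1e-3  # illustrative floor, mirroring eps=0.001 in __init__

def total_noise(t: float) -> float:
    # sigma(t) for a log-linear schedule (illustrative choice).
    return -math.log1p(-(1.0 - EPS) * t)

def t_from_total_noise(sigma: float) -> float:
    # Invert sigma = -log(1 - (1 - eps) * t):
    #   t = (1 - exp(-sigma)) / (1 - eps)
    return (1.0 - math.exp(-sigma)) / (1.0 - EPS)
```

Up to floating-point error, `t_from_total_noise(total_noise(t))` equals `t` for any $t \in [0, 1)$.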
forward(t)
Return the noise level and the total noise at time t.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `t` | `Float[Tensor, ' batch']` | The time step tensor of shape `(batch,)`. | *required* |
Returns:

- The noise level tensor of shape `(batch,)`
- The total noise tensor of shape `(batch,)`
sample_t(batch_size, device=torch.device('cpu'))
Sample a t uniformly from [0, 1].