Artificially improve arithmetic coding accuracy #4

Open
mitiko opened this issue Apr 23, 2023 · 0 comments
mitiko commented Apr 23, 2023

It's a known fact that arithmetic coders have a slight bias toward 0s by default. This is usually small enough to go unnoticed (20-30 bytes).

Method 1 (preferred): Round on lerp

Simply shifting ceils the result; rounding is more accurate.

let lerped_range = (range * p) >> 31;                         // keep one extra fractional bit
let rounded_range = (lerped_range >> 1) + (lerped_range & 1); // round half up instead of truncating
let xmid = x1 + u32::try_from(rounded_range).unwrap();

Needs to be tested and proven (encode & decode still round-trip on any input).
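
As a starting point for that test, here is a minimal sketch comparing the rounded midpoint against plain truncation. It assumes p is a 32-bit probability and that the truncating midpoint would be x1 + ((range * p) >> 32); the function names and scaling are illustrative assumptions, not the coder's actual API.

fn lerp_truncated(x1: u32, x2: u32, p: u32) -> u32 {
    let range = u64::from(x2 - x1);
    x1 + u32::try_from((range * u64::from(p)) >> 32).unwrap()
}

fn lerp_rounded(x1: u32, x2: u32, p: u32) -> u32 {
    let range = u64::from(x2 - x1);
    // keep one extra fractional bit, then use it to round half up
    let lerped_range = (range * u64::from(p)) >> 31;
    let rounded_range = (lerped_range >> 1) + (lerped_range & 1);
    x1 + u32::try_from(rounded_range).unwrap()
}

fn main() {
    // p = 1/2 on a range of 1001: the exact midpoint offset is 500.5,
    // truncation gives 500, rounding gives 501
    let (x1, x2, half) = (0u32, 1001u32, 1u32 << 31);
    assert_eq!(lerp_truncated(x1, x2, half), 500);
    assert_eq!(lerp_rounded(x1, x2, half), 501);
}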

Method 2: Swap bit positions

This one is a bit more complicated because the encoder and decoder must stay synchronized.
The range calculations are actually decoupled from the bits themselves, which means that if we swap the 1s and 0s with a certain frequency, we can offset the implicit bias.
For example, on every 3 consecutive identical bits we do a swap, or on every 2 consecutive 0s and every 3 consecutive 1s...

match bit ^ swap {
    0 => self.x1 = xmid + 1, // (possibly swapped) 0 takes the upper part of the range
    _ => self.x2 = xmid,     // (possibly swapped) 1 takes the lower part
}
// ...
if condition() {
    swap ^= 1; // flip the mapping; encoder and decoder must evaluate the same condition
}

The issue is that the swap condition effectively becomes a secondary model (see the sketch below), and we don't want a model in the entropy coder.
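
Purely as an illustration of what that secondary model looks like, here is one possible shape for the swap condition: flip the mapping after every 3 consecutive identical bits. The struct and its fields are assumptions, not existing code. It only looks at bits both sides have already coded, so the encoder and decoder stay in lockstep.

struct BitSwapper {
    swap: u8,     // current 0/1 mapping (0 = identity, 1 = swapped)
    last_bit: u8, // previous coded bit
    run: u32,     // length of the current run of identical bits
}

impl BitSwapper {
    fn new() -> Self {
        Self { swap: 0, last_bit: 0, run: 0 }
    }

    // the bit actually fed into the range update
    fn map(&self, bit: u8) -> u8 {
        bit ^ self.swap
    }

    // called with the raw bit after it has been coded;
    // the decoder recovers it as mapped_bit ^ swap before calling this
    fn update(&mut self, bit: u8) {
        if bit == self.last_bit {
            self.run += 1;
        } else {
            self.last_bit = bit;
            self.run = 1;
        }
        if self.run % 3 == 0 {
            self.swap ^= 1; // every 3 consecutive identical bits, flip the mapping
        }
    }
}

On the encoder side, map(bit) would take the place of bit ^ swap in the match above and update(bit) would run after the range update; the decoder does the same with the bit it just recovered.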

Method 3: Modify the probability

We can statically modify the probability, for example add some percentage if it drops below 25%.
However, this again messes with the model's job. In a perfect world the model gives perfectly accurate predictions, and any tinkering with them would only add entropy.
Perhaps the more general solution is letting the model offset the AC's bias, but then it is breaking its single-responsibility principle... Hmm
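
For reference, a static adjustment could look something like the sketch below, again assuming a 32-bit probability scale; the 25% threshold and the 1/8 pull strength are arbitrary illustration values, not tuned constants.

// nudge low predictions part of the way back toward the 25% line
fn adjust_probability(p: u32) -> u32 {
    const LOW: u32 = 1 << 30; // 25% of the 32-bit probability scale
    if p < LOW {
        p + (LOW - p) / 8
    } else {
        p
    }
}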

mitiko self-assigned this Apr 23, 2023