
Floating point error in Matrix.invert() #1

Open
@hal7df

Description


Both of the (valid) unit tests for Matrix.invert() require an epsilon scale factor of 1e10 (i.e. the results are only accurate to 5-6 decimal places), which is undesirable. This should be investigated to find a way to increase the accuracy of the inversion operation.
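
For context, a minimal sketch of the kind of element-wise comparison this implies, assuming the tests scale the machine epsilon (`TOLERANCE` and `approxEqual` are hypothetical names, not the library's API):

```typescript
// Sketch only: Number.EPSILON is ~2.22e-16, so a 1e10 scale factor tolerates
// errors of roughly 2.2e-6, i.e. about 5-6 decimal places of agreement.
const TOLERANCE = Number.EPSILON * 1e10;

function approxEqual(actual: number, expected: number): boolean {
  return Math.abs(actual - expected) <= TOLERANCE;
}
```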

Curiously, the rref() method seems to be accurate to 14 decimal places, yet the inverse it produces is less accurate. Is this a side effect of the algorithm, or is there a better matrix inversion algorithm to use?
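
For discussion, here is a minimal sketch of one commonly used alternative: Gauss-Jordan elimination with partial pivoting applied to the augmented matrix [A | I]. Whether this actually improves anything depends on how rref() currently chooses pivots; the function below is a standalone sketch over plain `number[][]` arrays, not the library's Matrix class.

```typescript
// Sketch only: Gauss-Jordan inversion with partial pivoting.
function invertGaussJordan(a: number[][]): number[][] {
  const n = a.length;

  // Build the augmented matrix [A | I].
  const aug = a.map((row, i) => [
    ...row,
    ...Array.from({ length: n }, (_, j) => (i === j ? 1 : 0)),
  ]);

  for (let col = 0; col < n; col++) {
    // Partial pivoting: swap in the row with the largest |pivot| to limit
    // the growth of floating-point error during elimination.
    let pivotRow = col;
    for (let r = col + 1; r < n; r++) {
      if (Math.abs(aug[r][col]) > Math.abs(aug[pivotRow][col])) pivotRow = r;
    }
    if (Math.abs(aug[pivotRow][col]) < Number.EPSILON) {
      throw new Error("Matrix is singular or nearly singular");
    }
    [aug[col], aug[pivotRow]] = [aug[pivotRow], aug[col]];

    // Normalize the pivot row.
    const pivot = aug[col][col];
    for (let j = 0; j < 2 * n; j++) aug[col][j] /= pivot;

    // Eliminate the pivot column from every other row.
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const factor = aug[r][col];
      for (let j = 0; j < 2 * n; j++) aug[r][j] -= factor * aug[col][j];
    }
  }

  // The right half of the augmented matrix is now A^-1.
  return aug.map((row) => row.slice(n));
}
```

If rref() already pivots this way, the remaining error may simply reflect the conditioning of the test matrices, in which case a residual-based check (e.g. comparing A·A⁻¹ against I) might be a fairer test than tightening the element-wise epsilon.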

Here's an example from one of the test cases:

Actual:
(screenshot of the expected inverse matrix)

Computed:
(screenshot of the computed inverse matrix)

Observed in Chromium 60.0.3112.78 on Ubuntu 16.04
