Add support for double precision. #229

Draft
ru-imajion wants to merge 2 commits into jkuhlmann:master from ru-imajion:master

Conversation

@ru-imajion

This allows cgltf_float to be set to double. Resolves #228.

  • Adds a define, CGLTF_DOUBLE_PRECISION, that when set makes cgltf_float an alias for double instead of float.
  • Adds a flag to enable testing with ./test_all.py --double-precision

@zeux
Contributor

zeux commented Oct 6, 2023

One subtlety from #228 that this is missing is cgltf_accessor_read_float / cgltf_accessor_unpack_floats. These should continue to use 32-bit float types: first, because glTF data contained in accessors can't be double precision; second, because these functions have fast paths that directly copy 32-bit floats.

We should also audit uses of cgltf_component_type_r_32f to double check that these don't use cgltf_float.

All of this could probably use cgltf_float32 to be explicit; standard float types are probably not a good idea once we have this configuration define.

@zeux
Contributor

zeux commented Oct 7, 2023

Also cgltf_write's CGLTF_DECIMAL_DIG constant probably needs to be updated here.

@ru-imajion
Author

cgltf_accessor_read_float looks OK to me. I think you may be right about cgltf_accessor_unpack_floats; all tests are passing now, so I'll create a test that fails it and come up with a patch.

cgltf_float32 is a fine idea. There already seemed to be an assumption that float was 32 bits, so I went with it, but this is an easy change.

I need a bit more time to understand CGLTF_DECIMAL_DIG, but you have a point here.

…loat or double.

CGLTF_DECIMAL_DIG is increased to DBL_DECIMAL_DIG, or 17, when the CGLTF_DOUBLE_PRECISION flag is set.
Fixed a memcpy issue in cgltf_accessor_unpack_floats when CGLTF_DOUBLE_PRECISION is set.


Development

Successfully merging this pull request may close these issues.

Double-precision transform hierarchy
