Improve voxel memory usage via Texture3D #13084

Conversation
Thank you for the pull request, @jjhembd! ✅ We can confirm we have a CLA on file for you.
jjhembd left a comment
I added some review comments in key areas, and in places where I need feedback.
  } = options;

- if (!context.webgl2) {
+ if (!context.webgl2 && !defined(context.options.getWebGLStub)) {
We need to allow a WebGL stub, since that is what we use for testing. Much of the following code can still be reasonably tested in a stub.
I'm open to feedback on where to allow the stub. Should we try to set a .webgl2 property on the stubbed context? Or, since WebGL2 is now the default, should we be switching to a context.webgl1 flag?
I suspect the stub would ideally be shaped like a WebGL2 context for forward-compatibility. Does that create other issues in tests? The !defined(context.options.getWebGLStub) check above also seems reasonable, if so, but I would just add a comment explaining why it's there (perhaps with a link to an issue) since that may not be obvious to the next reader.
The Context.webgl2 flag is set as follows:
const webgl2 = webgl2Supported && glContext instanceof WebGL2RenderingContext;

It will be hard to make the stub pass that instanceof check. This makes me think a replacement of the .webgl2 flag with a .webgl1 flag is the best route. However, that would be a breaking change... so maybe it's better to just accept that support for both WebGL1 and WebGL2 will be messy, but temporary (we will drop WebGL1 someday).
I added a comment clarifying the check in Texture3D.
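For reference, a sketch of the guarded check with the kind of explanatory comment discussed here (the comment wording and error message are illustrative, not the actual source):

```js
// Texture3D requires WebGL 2. The WebGL stub used in the test suite is not a
// real WebGL2RenderingContext, so context.webgl2 is false there; let the stub
// through so the rest of the class can still be exercised in the specs.
if (!context.webgl2 && !defined(context.options.getWebGLStub)) {
  throw new DeveloperError("A WebGL 2 context is required for Texture3D.");
}
```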
  Check.typeOf.number.greaterThan("width", width, 0);

- if (width > ContextLimits.maximumTextureSize) {
+ if (width > ContextLimits.maximum3DTextureSize) {
This appears to have been an oversight in #12611. It matters for large textures: maximum3DTextureSize tends to be smaller than maximumTextureSize.
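For reference, the two limits come from different context queries, which is why they can differ; the WebGL 2 guaranteed minimums are 2048 for 2D textures but only 256 for 3D textures. A quick way to compare them in a browser console (raw WebGL 2, not CesiumJS code):

```js
// Query both texture size limits from a WebGL 2 context and compare them.
const gl = document.createElement("canvas").getContext("webgl2");
if (gl !== null) {
  console.log("MAX_TEXTURE_SIZE:", gl.getParameter(gl.MAX_TEXTURE_SIZE));
  console.log("MAX_3D_TEXTURE_SIZE:", gl.getParameter(gl.MAX_3D_TEXTURE_SIZE));
}
```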
 * }
 * });
 */
Texture3D.prototype.copyFrom = function (options) {
Compare to Texture.copyFrom. The Texture3D version here supports fewer source data types.
  Check.typeOf.bool("nearestSampling", nearestSampling);
  //>>includeEnd('debug');
  if (this._nearestSampling === nearestSampling) {
    return;
The texture.sampler setter makes some GL calls, so when possible, we exit early to avoid calling it.
Not sure if I understand why avoiding GL calls would be preferable here, is it likely that this setting would change frequently? Or it makes testing more difficult? Would it make sense to have a Sampler.LINEAR preset as well, and avoid Sampler construction entirely here without needing to check an internal cache?
The .nearestSampling setter is called every frame, from VoxelPrimitive.prototype.update. The redirection from a setter to a per-frame update method is how we debounce user input. Then, by checking for changes, we avoid extra GL calls every frame.
I suppose we could instead check for change in the VoxelPrimitive.prototype.nearestSampling setter--let me know if you think that would make more sense. My initial thought was that nearestSampling is really a texture thing, so it made sense to do the state / change tracking from the class that is closer to the texture.
A Sampler.LINEAR preset might make sense. It's less obvious to me what the default edge conditions should be--many linearly sampled textures might use wrapping rather than clamping? If so, we would almost need to include the wrap conditions in the name, i.e., Sampler.LINEAR_CLAMPED ?
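A minimal sketch of the debounce-and-early-exit pattern described above (the property location and sampler options are assumptions for illustration, not the exact PR code):

```js
// Sketch: the per-frame update forwards the user-facing setting here; the early
// return keeps us from reassigning texture.sampler (which issues GL calls)
// when the value has not actually changed.
Object.defineProperties(Megatexture.prototype, {
  nearestSampling: {
    set: function (nearestSampling) {
      if (this._nearestSampling === nearestSampling) {
        return; // unchanged: skip sampler construction and GL state updates
      }
      this._nearestSampling = nearestSampling;
      this._texture.sampler = new Sampler({
        minificationFilter: nearestSampling
          ? TextureMinificationFilter.NEAREST
          : TextureMinificationFilter.LINEAR,
        magnificationFilter: nearestSampling
          ? TextureMagnificationFilter.NEAREST
          : TextureMagnificationFilter.LINEAR,
      });
    },
  },
});
```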
 * @returns {Cartesian3} The computed 3D texture dimensions.
 */
- Megatexture.getApproximateTextureMemoryByteLength = function (
+ Megatexture.get3DTextureDimension = function (
This function is not used outside the class, but we expose it for testing.
Possibly worth adding this as a comment on the method and/or adding a JSDoc @ignore hint? OK with me either way though, I see the class itself is private.
I added a comment and a @private tag. I know the tag is technically redundant, but it's to communicate the intent to the reader. I don't see much use of @ignore elsewhere in the repo.
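For illustration, the kind of annotation being discussed might look like this (wording is not taken from the PR):

```js
/**
 * Computes the dimensions of the 3D texture needed to hold the given tiles.
 * Not used outside this class, but exposed on Megatexture so it can be
 * exercised directly in the specs.
 *
 * @private
 */
Megatexture.get3DTextureDimension = function (/* ... */) {
  // ...
};
```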
  inputDimensions,
  types,
  componentTypes,
  // TODO: refine this. If provider.maximumTileCount is not defined, we will always allocate 512 MB per metadata property.
Seeking feedback here. What if the tileset has 10 metadata properties? This would allocate 5 GB. Should we default to a total memory per tileset?
  it("shows texture memory allocation statistic", function () {
-   expect(traversal.textureMemoryByteLength).toBe(textureMemoryByteLength);
+   expect(traversal.textureMemoryByteLength).toBe(32);
The new allocation is smaller if the data is smaller than the suggested byte length.
donmccurdy left a comment
Still reading through things, but just a couple initial comments!
For context on the matter of WebGL2/WebGL1 support: while we're not likely to fully deprecate WebGL 1 support in the near future, our implicit policy is that newer features like voxels don't need backwards compatibility with WebGL 1. The rationale is that we default to WebGL 2 at this point, and we want to be able to take advantage of newer features available to us, like 3D textures. Additionally, the voxel APIs in particular are marked as "experimental", which means they are subject to breaking changes without deprecation. (The latter is documented in our Coding Guide. Perhaps we should also add a note about our WebGL 1/2 "policy".)
donmccurdy left a comment
When loading the "Voxel Picking" demo and selecting the "Cylinder" dataset, I'm seeing a small rendering difference, near the center of the cylinder:
Before
After
After moving the mouse around a bit, the new version updates to match the old version, so possibly something isn't being fully updated during initialization?
Similarly there is a rendering difference on the "Voxel Rendering" example, a change in transparency and interpolation, though it seems more plausible this might be intended:
Before
After
  if (PixelFormat.isCompressedFormat(this._pixelFormat)) {
    throw new DeveloperError(
      "Cannot call copyFrom with a compressed texture pixel format.",
    );
  }
Question — I believe this is just currently unsupported, and doesn't represent a technical blocker or a decision not to support compressed formats. Would it be worth hinting at that in the comment in case a user runs into it and is willing to try a PR?
  if (PixelFormat.isCompressedFormat(this._pixelFormat)) {
    throw new DeveloperError(
-     "Cannot call copyFrom with a compressed texture pixel format.",
+     "Unsupported copyFrom with a compressed texture pixel format.",
    );
  }
  channelCount,
  componentType,
- availableTextureMemoryBytes,
+ availableTextureMemoryBytes = 134217728,
Perhaps worth keeping the explanation of the default from below:
- availableTextureMemoryBytes = 134217728,
+ availableTextureMemoryBytes = 134217728, // 1024x1024 @ 128bpp
  // Find a nearby number with no prime factor larger than 7.
  const factors = findFactorsOfNearbyComposite(tileCount, maxTileCount);
I'm curious why small prime factors are preferable over power-of-two dimensions, or arbitrary integers? It makes sense to me that limiting ourselves to square textures could increase memory but I'm less sure about this part.
"power-of-two dimensions" normally refers to the actual pixel size of the texture. This proves to be very memory-inefficient in 3D. For example, suppose we have a single-tile dataset with pixel dimensions 17x17x17 (total 4,913 pixels). A power-of-two restriction would allocate 32x32x32, or 32,768 pixels, over 6x as much memory as is necessary.
This PR removes the power-of-two restriction on the actual pixel size, allowing single-tile datasets to allocate exactly the required amount of memory.
The "small prime factor" restriction applies to the number of tiles. Suppose we have a dataset with 9 tiles (for example, a simple octree with 2 levels of detail). If we required the number of tiles to be a power of two, we would have to allocate enough memory for 16 tiles, or almost double the actual need. The code in this PR would allocate space for 10 tiles (assuming the tiles are too large to fit all of them in a 1x1x9 stack), of which 10% would be "wasted"/unused. We could theoretically allocate a texture that would fit a "pancake" of 3x3x1 tiles, but the current code is too simplistic to find that solution. (Only numbers with at most one non-power-of-two factor are considered.)
The current setup for the tile count can be thought of as "power-of-two" with additional options in between powers of two, computed as 5/4, 6/4, and 7/4 the size of the previous power of two. For example, for a dataset with between 8 and 16 tiles, we can select from the options (8, 10, 12, 14, 16). With this setup, the maximum allocation will be 5/4 the size of the input data, or 25% more than necessary.
We could in theory add more options. The series (9/8, 10/8, 11/8, 12/8, 13/8, 14/8, 15/8) would reduce the maximum excess allocation to 12.5% more than necessary. Or we could allow arbitrary integers, which would eliminate all excess allocation. However, then we would run into problems packing large prime numbers into a box with no dimension exceeding GL_MAX_3D_TEXTURE_SIZE. For example, suppose the input has 13 tiles, each with pixel dimensions 32x32x32. The only way to pack these into a box exactly is to define a box that is 1x1x13 tiles (32x32x416 pixels). However, some contexts may not allow a 3D texture with more than 256 pixels along a side—see GL_MAX_3D_TEXTURE_SIZE in glGet.
In the above 13-tile case, if GL_MAX_3D_TEXTURE_SIZE is the minimum 256, this PR would allocate a 1x2x7 tile texture (32x64x224 pixels), which is one tile larger than the input data, but fits within the maximum dimensions.
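As a rough illustration of the candidate series described above (this is not the PR's findFactorsOfNearbyComposite, which also returns the factorization and honors maxTileCount), the selectable tile counts are numbers of the form {4, 5, 6, 7} * 2^m:

```js
// Return the smallest candidate tile count >= tileCount from the series
// described above: powers of two plus the in-between values 5/4, 6/4, and 7/4
// of the previous power of two. All candidates have no prime factor larger than 7.
function nearbyCompositeTileCount(tileCount) {
  for (let m = 0; ; m++) {
    for (const base of [4, 5, 6, 7]) {
      const candidate = base * 2 ** m;
      if (candidate >= tileCount) {
        return candidate;
      }
    }
  }
}

console.log(nearbyCompositeTileCount(9)); // 10
console.log(nearbyCompositeTileCount(13)); // 14
console.log(nearbyCompositeTileCount(17)); // 20
```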
I know, it's none of my business, but I hope that it's not toooo distracting: I was curious and pulled that function into a sandcastle, and tried to feed it with some of these numbers...
To my understanding, it's putting "three and a half" tiles along one dimension. Did I miss anything here?
Thanks for checking, @javagl! That was clearly a bug in the getDimensionFromFactors helper, which would have been an issue in contexts with small maximum dimension, with small tilesets/small memory limits, where the desired tile count may include only one factor of two. I just pushed a fix. Here is an updated Sandcastle demonstrating the fixed behavior.
The current code can still be broken if GL_MAX_3D_TEXTURE_SIZE is small and the size of each tile is large. However, it should throw an error with a reasonably clear error message in that case.
Admittedly, I didn't do the math of this factor computation, e.g. what exactly is done there with these primes (and why...). And I'm lacking some context (e.g. was surprised that this bytesPerSample was only relevant for the total size, and I don't know what "usual" tile dimensions are, and what usual maximum texture sizes are...).
But I was somewhat curious: Given that the tile sizes are all the same, I think that this might not even be an NP-complete problem.
First, I thought that one could compute the "wasted space" somehow directly. But the maximum texture size, the different tile dimensions, and the resulting (different!) maximum numbers of tiles along each dimension make this more tricky.
I still think that one could map this to something that can be solved deterministically, or at least approximated very well with standard algorithms. Specifically: I have a veeery vague feeling that there is some "shortest path computation" hidden in that.
But I didn't go to the level of pen+paper right now. As a "quicker shot", I thought that ~"some greedy approach" could do it. These can be pretty good for this sort of problem, and often only deliver "bad" results for "very unusual/extreme" configurations. But ... (coming back to the lack of context:) ... I don't know what "common" or "uncommon" configurations are.
I casually hacked around a bit: I created one exhaustive search (searching the configuration that minimizes the "wasted space"), and an additional simple "greedy" one (that just greedily fills the available space based on the sorted dimensions), and added some corresponding "test functions" around that. It's not really productive in some way, and certainly doesn't count as "Cesium time" - rather some recreational thing - but will dump the result here...
(This also contains the original function, but still the one without the fix)
As the name runSingleTest suggests: I considered to add more test cases, with a mix of extreme tile sizes like (1,1,128) and un-extreme ones like (16,16,16), and of course, some tileCount values that are either prime (like 1259) or highly composite (like 1260 - yeah...).
But ... it's ~2am here after all 🤓
I think that I roughly understand why that case is not working, but may not fully understand why certain limitations exist.
And as mentioned above: I don't know which configurations really appear in practice. So all this is no reason to hold up this PR. (In the hope that when an issue is opened for some error message like ~"WebGL error: negative texture size", it will be possible to 1. quickly zoom to the relevant code part, 2. find out why a certain case caused a negative texture size, and 3. fix it as appropriate.)
In the last comment, I mentioned the 'greedy2' result, but not what this does: It starts with the maximum number of tiles that do fit (along each dimension), and then always decreases the tile count by 1 along the dimension that causes the greatest reduction in "wasted space". It is guaranteed to find a solution (if one exists), often finds the ideal solution, often one that is better than the current one, and I haven't seen a case where it finds one that is worse, but I didn't really (really really) systematically, exhaustively, and fairly compare things here. (For cases like a maximum size of 2048, and a single tile of size 1x1x1, this would mean that it decrements the size 2047+2047+2047 times, but maybe that's not an issue either.)
The core of the approach (other approaches and sanity checks omitted here):
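The original snippet is not reproduced here; the following is a rough sketch of the greedy decrement it describes (variable names assumed; memory limits and other sanity checks omitted):

```js
// Start from the largest tile grid that fits the per-axis texture size limit,
// then repeatedly remove one tile along the axis whose removal shrinks the
// capacity the most, while still leaving room for tileCount tiles.
function greedyTileGrid(tileCount, tileDimension, max3DTextureSize) {
  // Maximum number of tiles that fit along each axis.
  const grid = tileDimension.map((d) => Math.floor(max3DTextureSize / d));
  const capacity = (g) => g[0] * g[1] * g[2];
  if (capacity(grid) < tileCount) {
    return undefined; // No packing fits within the texture size limit.
  }
  let best;
  do {
    best = undefined;
    for (let i = 0; i < 3; i++) {
      const next = grid.slice();
      next[i] -= 1;
      // Only candidates that still hold all tiles are allowed; among those,
      // prefer the one with the smallest capacity (least wasted space).
      if (next[i] >= 1 && capacity(next) >= tileCount) {
        if (best === undefined || capacity(next) < capacity(best)) {
          best = next;
        }
      }
    }
    if (best !== undefined) {
      grid[0] = best[0];
      grid[1] = best[1];
      grid[2] = best[2];
    }
  } while (best !== undefined);
  return grid; // Number of tiles along each axis of the 3D texture.
}
```

For the 13-tile example above (32x32x32 tiles, a 256-pixel limit), this yields a 1x2x7 grid, matching the 32x64x224 texture mentioned earlier.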
Thanks @javagl, I think your 'greedy2' approach is both simpler and better.
As for the number of decrement iterations, I think we can use some geometric hand-waving to limit it to a 3-count loop:
- We start from a "cube" which is filled as full as possible with tiles. The number of tiles along a given axis is inversely proportional to the dimension of one tile along that axis.
- The dimension along which a decrement will cause 'the greatest reduction in "wasted space"' is the dimension along which the tile size is largest. A "slice" perpendicular to this axis will necessarily have a larger number of tiles than a "slice" across either of the other axes.
- After removing one "slice", the "slice" which will cause 'the greatest reduction' is still in the same dimension. That "slice" has not changed in size; while the other "slices" have only gotten smaller.
- We can therefore compute the number of "slices" to remove from each axis sequentially, in a single operation per axis.
Here is a Sandcastle demonstrating the optimized 'greedy2' approach.
It is less obvious to me to quantify just how "good" this approach is. My previous attempt had a known bound on the amount of wasted space, when the maximum 3D texture size was not an issue. When bumping up against the maximum texture size, however, it would sometimes allocate a texture that was much smaller than ideal, because the factorization approach didn't tend to find cubical (or nearly cubical) shapes.
In every example that I can think of, the 'greedy2' approach will find a solution that is equivalent or better. But I don't know how to prove this.
The "greedy2", in the posted form, was certainly not optimized in any way! It was rather "brainstorming in code". What you suggested (basically, determining beforehand how many of these next[i] -= 1;'s there will be, and doing them in a systematic order (i.e. not "just trying around")) sounds reasonable, and as far as I understood, should always give the same result.
Any proof - on a mathematical level - about "how good" the result will be is difficult.
(Anecdotal: I once did a 1-hour presentation about https://www.sciencedirect.com/science/article/pii/S0166218X0500377X , so... maybe we'll first have to prove whether the problem that we're dealing with actually does have a polynomial-time constant-factor approximation algorithm?)
One of the challenging parts for that would be the two (independent!) limits, namely, the maximum texture size and the available memory. So ... I'm a nerd with some affinity to mathematics, but there's still enough "engineer" in that mix that I'd say "Let's try out". If this was relevant, I'd run that (messy) Sandcastle that literally dumped CSV to the console, with even more configurations, and add a column that contains the original- and greedy2 results, divided by the "exhaustive" result, look at the min/max/average of these columns, and see whether anything stands out.
Is it relevant? Well, I'll probably do this anyhow - not today or tomorrow, but maybe in ~"the next few days". I just don't know whether this should affect this PR in any way...
After a bit (too little) sleep, and a quick look at the actual sandcastle code, I'm not sure whether this will always deliver the same result. It only modifies each dimension once. I think that there could be cases where the "best" dimension changes during that process. Imagine something in 2D that is close to a square. The process could be
- cut off a bit along x, because that's the best (basically because sizeY>sizeX)
- cut off a bit along x, because that's still the best
- now cutting along y is better than along x (because sizeY<sizeX) - so cut off a bit along y
- cutting along y is still the best - cut off a bit along y
- now, cutting along x is better again - so cut off along x
- ...
but... that was just an unverified thought/gut feeling behind the original "greedy2". Maybe this can not happen. I'll have to check.
I ran a few more tests, and will just dump the result here:
Most relevant points:
- There doesn't seem to be a difference between "greedy2" and your optimized version of the "greedy2", which is called "greedy3" here
- Some key numbers below the table:
  - Maximum waste:
    - Original: 0.12
    - Greedy: 0.12
    - Exhaustive: 0.024
  - Average waste*:
    - Original: 0.0164
    - Greedy: 0.0072
    - Exhaustive: 0.0008
* The "average" doesn't really make sense, given that the test configurations are completely arbitrary (I could have picked configurations where 'all but one' configuration has a waste of 0 for all algorithms).
I'll probably leave it at that for now. The "summary" is that it looks like the "greedy2/3" is simpler (and easier to understand) than the original, never generates worse solutions, and in many cases generates better solutions. The tests are not "exhaustive" (and maybe don't even properly reflect "real-world configurations"), and of course, there is no formal proof for anything, whatsoever. But ... Experimental mathematics FTW 🙂
  if (Math.floor(n) !== n) {
    throw new DeveloperError("n and maxN must be integers");
  } else if (n < 1) {
    throw new DeveloperError("n and maxN must be at least 1");
  }
Would it make sense to gate these checks behind a //>>includeStart('debug', pragmas.debug);? Asking more for my own understanding of CesiumJS best practice than an actual preference. :)
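For reference, the pattern in question wraps such checks in build pragmas (as seen in the earlier snippet), so they run in the debug/unminified build and are stripped from release builds:

```js
//>>includeStart('debug', pragmas.debug);
if (Math.floor(n) !== n) {
  throw new DeveloperError("n and maxN must be integers");
} else if (n < 1) {
  throw new DeveloperError("n and maxN must be at least 1");
}
//>>includeEnd('debug');
```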
I'm not 100% sure I'm measuring the right thing, but opening the "Voxel Rendering" sandcastle and logging … Is this expected and/or is there a better way to check total memory allocations? I haven't ventured into setting up something like webgl-memory in a sandcastle quite yet.

See the "Testing Plan" of #12370 (I think that I did only use it in a sandcastle by "hacking" it into the local build, but think that this snippet there was supposed to work in an actual sandcastle)
Thanks a lot for the feedback @donmccurdy! I think I addressed the code comments. For your testing results:

About (1), it might be LOD-related, yes. The viewport size is the same, though. Here's a screen capture comparing the PR to production. I'm not actually moving the camera, just hovering the mouse, so perhaps the picking operation is resetting some cache or GL state... voxel_picking_cylinder_delayed_update.webm
Description
This PR improves memory efficiency in VoxelPrimitive by reworking Megatexture to use the Texture3D class from #12611.

Megatextures
Unlike Cesium3DTileset, which renders each tile with separate DrawCommands, voxel tilesets are rendered in a single draw command for all tiles. This is necessary because of the raymarching approach: foreground tiles affect the rendering of background tiles. As a result, all tiles in the scene must be loaded into a single texture, via the Megatexture class. Different tiles are assigned different regions within the same Megatexture.

Previous Megatexture implementation
Prior to this PR, the Megatexture implementation was backed by a 2D texture. 3D voxel tiles were split into 2D slices, which were then arranged across the 2D texture. This structure made VoxelPrimitive compatible with WebGL1 contexts, but had some drawbacks:

New Megatexture implementation
MegatextureimplementationThis PR reworks
Megatextureto useTexture3D, and removes restrictions on the size of the texture. This simplifies the shader code, since we can directly use built-in graphics APIs for both linear and nearest sampling. Also, the texture can be allocated to more closely fit the size of the actual data.How 3D Textures are sized
The data for each tile is like a 3D box, and the Megatexture is like a bin into which we are packing the boxes. Bin packing in general is a hard optimization problem. The approach used here starts from some simplifying assumptions:
- All sizing is done based on the maximum number of tiles, which is either a value from the VoxelProvider, or the available memory divided by the memory size of one tile.
- We first check for a special case: if all of the tiles can fit in a single 1x1xN stack, without the long dimension exceeding GL_MAX_3D_TEXTURE_SIZE, then the texture is allocated to that size. This guarantees no wasted space, because this shape can be made to fit the tiles exactly. We make sure to stack the tiles along the smallest tile dimension, to increase the chances of achieving this optimal case.
- If more than one row of tiles is needed (GL_MAX_3D_TEXTURE_SIZE tends to be smallish), we then proceed as follows (see the sketch after this list):
  - tileCount[i] * tileDimension[i] < GL_MAX_3D_TEXTURE_SIZE
  - textureSize[i] = tileCount[i] * tileDimension[i]
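A minimal sketch of that per-axis rule, with placeholder names (the real Megatexture code also chooses the tile grid via the factorization logic discussed in the review):

```js
// Once the tile grid (tiles along each axis) is chosen, each axis must fit
// within GL_MAX_3D_TEXTURE_SIZE, and the texture is sized to the grid exactly.
function textureSizeForTileGrid(tileCount, tileDimension, max3DTextureSize) {
  const textureSize = [0, 0, 0];
  for (let i = 0; i < 3; i++) {
    if (tileCount[i] * tileDimension[i] >= max3DTextureSize) {
      throw new Error("Tile grid exceeds GL_MAX_3D_TEXTURE_SIZE along an axis");
    }
    textureSize[i] = tileCount[i] * tileDimension[i];
  }
  return textureSize;
}
```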
Other changes
- ContextLimits documentation and specs to assume the default WebGL2
- ContextLimits.maximum3DTextureSize
- Texture3D.prototype.copyFrom method, following the similar method from Texture
- Voxels Sandcastle example

Issue number and link
Resolves #12570
Testing plan
Run all specs locally. Changes in ShaderBuilder should not affect non-voxel rendering, but we should verify this.
Load all voxel-related Sandcastles, and verify:
Author checklist
- CONTRIBUTORS.md
- CHANGES.md with a short summary of my change