According to the paper, the image passes through the encoder to produce three feature maps of size 32x32. After upsampling, each feature map is 64x64, and the PointFeatures are obtained through the triplane decoder. What determines the number of PointFeatures fed into the MLP, and what is the image resolution after rendering? A rough sketch of my current understanding is given below.
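My current understanding, as a minimal sketch (assuming standard triplane bilinear sampling via `grid_sample`, not necessarily the paper's exact implementation; the channel count and point count below are illustrative), is that the number of PointFeatures equals the number of 3D query points, independent of the 64x64 plane resolution:

```python
import torch
import torch.nn.functional as F

def sample_triplane_features(planes, points):
    """
    planes: (3, C, 64, 64) -- XY, XZ, YZ feature planes from the triplane decoder
    points: (N, 3)         -- N query points, normalized to [-1, 1]^3
    returns: (N, 3*C)      -- one feature vector per query point (planes concatenated)
    """
    # Project each 3D point onto the three axis-aligned planes.
    coords = torch.stack([
        points[:, [0, 1]],   # XY plane
        points[:, [0, 2]],   # XZ plane
        points[:, [1, 2]],   # YZ plane
    ])                                               # (3, N, 2)

    # grid_sample expects (B, H_out, W_out, 2); treat the N points as a 1xN grid.
    grid = coords.unsqueeze(1)                       # (3, 1, N, 2)
    sampled = F.grid_sample(planes, grid,
                            mode="bilinear", align_corners=True)  # (3, C, 1, N)
    sampled = sampled.squeeze(2).permute(2, 0, 1)    # (N, 3, C)
    return sampled.reshape(points.shape[0], -1)      # (N, 3*C) -> input to the MLP


# Illustrative numbers: 2048 query points and C = 32 channels per plane
planes = torch.randn(3, 32, 64, 64)
points = torch.rand(2048, 3) * 2 - 1
feats = sample_triplane_features(planes, points)
print(feats.shape)  # torch.Size([2048, 96]) -- one feature per query point
```

Is this the right picture, i.e. is the PointFeature count set by the number of sampled query points rather than by the feature-map resolution?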