The plugin I'm asking this question about distorts one image based on another (a still or a sequence). The distortion image needs to be 32-bit float depth, though it can get by with 16-bit at reduced accuracy in some cases. I don't want to keep forcing users (as I do now) to bump the entire project up to 16-bit or float depth, because of the longer processing and storage times and the limited/modified effects functionality at those depths.
How can I obtain a 32-bit version of a layer (which is typically just bare footage)? Note that this means the footage must be read at its full bit depth and NOT be subject to double-conversion, i.e. from 32-bit down to 8-bit and then back up to 32-bit, for example.
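For context, here's roughly how I check out the distortion layer today (a sketch, not my exact code; DISTORT_CHECKOUT is a placeholder for my real param index / checkout id, everything else is the stock SDK). The world that comes back is always at the project's working depth, which PF_GetPixelFormat confirms:

```cpp
#include "AE_Effect.h"
#include "AE_EffectCB.h"
#include "AE_EffectSuites.h"
#include "AEFX_SuiteHelper.h"
#include "AE_Macros.h"

#define DISTORT_CHECKOUT 1   // placeholder for my real checkout id / param index

static PF_Err
SmartRender(PF_InData *in_data, PF_OutData *out_data, PF_SmartRenderExtra *extra)
{
    PF_Err          err             = PF_Err_NONE;
    PF_EffectWorld  *distort_worldP = NULL;
    PF_WorldSuite2  *wsP            = NULL;
    PF_PixelFormat  format          = PF_PixelFormat_INVALID;

    // The checked-out pixels arrive at whatever depth the project is set to;
    // I haven't found a way to ask for ARGB128 here.
    ERR(extra->cb->checkout_layer_pixels(in_data->effect_ref, DISTORT_CHECKOUT, &distort_worldP));

    ERR(AEFX_AcquireSuite(in_data, out_data, kPFWorldSuite, kPFWorldSuiteVersion2,
                          "Couldn't acquire PF_WorldSuite2.", (void**)&wsP));

    if (!err && distort_worldP) {
        // Reports ARGB32 / ARGB64 / ARGB128 to match the project depth,
        // not the footage's native depth.
        ERR(wsP->PF_GetPixelFormat(distort_worldP, &format));
    }

    if (wsP) {
        AEFX_ReleaseSuite(in_data, out_data, kPFWorldSuite, kPFWorldSuiteVersion2, NULL);
    }
    ERR2(extra->cb->checkin_layer_pixels(in_data->effect_ref, DISTORT_CHECKOUT));

    return err;
}
```

So in an 8-bpc project I only ever see ARGB32 here, no matter what the source footage actually contains.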
I'm looking for a known-good solution from Adobe, not something untested that might work but is just as likely to double-convert.
In an ideal world, I'd specify the desired bit depth as an additional field in PF_RenderRequest when checking out the layer in Smart PreRender: one of those "unused" chars, for example, would do just fine if everything supported it (with the default 0 meaning the project's bit depth).
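Something along these lines (a sketch only, reusing the placeholder and headers from the snippet above; the commented-out line is the hypothetical part that does not exist in the current SDK):

```cpp
static PF_Err
PreRender(PF_InData *in_data, PF_OutData *out_data, PF_PreRenderExtra *extra)
{
    PF_Err            err = PF_Err_NONE;
    PF_RenderRequest  req = extra->input->output_request;
    PF_CheckoutResult checkout;

    AEFX_CLR_STRUCT(checkout);

    // Hypothetical, NOT in the current SDK: repurpose one of the unused chars
    // as a requested checkout depth, with the default 0 keeping today's
    // behaviour (project bit depth).
    /* req.unused[0] = 32; */

    ERR(extra->cb->checkout_layer(in_data->effect_ref,
                                  DISTORT_CHECKOUT,   // param index of the distortion layer
                                  DISTORT_CHECKOUT,   // checkout id
                                  &req,
                                  in_data->current_time,
                                  in_data->time_step,
                                  in_data->time_scale,
                                  &checkout));

    if (!err) {
        extra->output->result_rect     = checkout.result_rect;
        extra->output->max_result_rect = checkout.max_result_rect;
    }
    return err;
}
```

Then checkout_layer_pixels in Smart Render would hand back an ARGB128 world regardless of the project setting, with the footage read once at full precision on AE's side.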
Thanks.