I've had this in the back of my head for quite a while; since v3 supports only iOS 5 and up, I thought it would be a good first test of v3 for me. I'm proposing this to replace the current implementation.
I rewrote CCRenderTexture to run on a texture backed by a shared-memory CVPixelBuffer. This allows fast reading from the render texture (much faster than glReadPixels), but it also allows writing back to it: read the pixels, apply some pixel effect, write them back to the buffer, and the result displays. It only works on a device, not in the simulator (that's a weakness of the simulator), but everything you can do with the current render-buffer version still works.
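For anyone who hasn't used the Core Video texture cache, the core of the approach looks roughly like this (a device-only sketch with error handling omitted; the patch's actual code may differ in details):

```objc
// Create a pixel buffer whose backing memory both the CPU and GPU can see.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

// Wrap it in a GL texture via the texture cache -- no copy, same bytes.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             [EAGLContext currentContext], NULL, &textureCache);

CVOpenGLESTextureRef texture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             pixelBuffer, NULL, GL_TEXTURE_2D,
                                             GL_RGBA, width, height, GL_BGRA,
                                             GL_UNSIGNED_BYTE, 0, &texture);

// To touch the pixels from the CPU, lock, read/write, unlock:
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
GLubyte *pixels = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);
// ... read or modify pixels here; changes show up in the texture ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
```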
Here's the project and tests https://dl.dropboxusercontent.com/u/29620311/PixelBuffer.zip
-- Fast reading of the texture; good for pixel-perfect collisions, color under a touch, etc.
-- Faster than glReadPixels, and it also avoids the memory spike of having to create another buffer the same size as the original texture just to read the data. The data already exists, and you use the exact same texture array in a shared-memory buffer that both the CPU and GPU can access.
-- Pixel effects: you can read the data, apply an effect (grayscale, for instance), write it back, and it's baked into the texture.
-- This is pretty cool: you can now run Core Image CIFilter effects straight on the render buffer (demo included). So all the built-in effects that come with Core Image can now run on your sprite (if it's backed by a render texture). I also made it easy to chain filter effects so they run faster.
-- The current save-to-UIImage path uses glReadPixels, so this should be faster and save at least 33% of the temporary memory spike by avoiding the two extra buffers created on top of the one already in memory.
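To make the read-effect-write flow above concrete, here's a minimal grayscale pass in plain C over a BGRA buffer (a hypothetical standalone helper, not the patch's API; in practice the pointer would come from the locked pixel buffer):

```c
#include <stdint.h>
#include <stddef.h>

/* Grayscale an in-place 32BGRA buffer, the layout you get from the locked
 * pixel buffer. bytesPerRow is passed separately because Core Video may pad
 * rows wider than width * 4. */
static void grayscale_bgra(uint8_t *pixels, size_t width, size_t height,
                           size_t bytesPerRow)
{
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = pixels + y * bytesPerRow;
        for (size_t x = 0; x < width; x++) {
            uint8_t *p = row + x * 4;  /* p[0]=B p[1]=G p[2]=R p[3]=A */
            /* integer approximation of luma = 0.299R + 0.587G + 0.114B */
            uint8_t luma = (uint8_t)((77 * p[2] + 150 * p[1] + 29 * p[0]) >> 8);
            p[0] = p[1] = p[2] = luma;
        }
    }
}
```

Because the texture shares memory with the buffer, writing the result back is the whole job; there is no upload step afterwards.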
The demos I've put in this test:
--Access the pixels through a 1d array - fast
--Access the pixels through a 2d array - fast
--Access the pixels through convenience methods getPixel and setPixel - slow but easy
--Find the color under a touch on the render texture
--Creating and running Core Image filters on the texture: a single effect, and an array of effects that Core Image concatenates so they run faster than applying one effect after another
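The 1D/2D/convenience access styles in those demos boil down to the indexing below (illustrative C helpers with hypothetical names, not the patch's actual methods):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t b, g, r, a; } Pixel; /* matches 32BGRA byte order */

/* 1D access: one flat array, index computed by hand -- fast. */
static Pixel getPixel1D(const Pixel *pixels, size_t width, size_t x, size_t y)
{
    return pixels[y * width + x];
}

/* 2D access: view the same memory as rows of `width` pixels -- fast. */
static Pixel getPixel2D(const Pixel *pixels, size_t width, size_t x, size_t y)
{
    const Pixel (*rows)[width] = (const Pixel (*)[width])pixels;
    return rows[y][x];
}

/* Convenience setter -- slower per call, but easy. */
static void setPixel(Pixel *pixels, size_t width, size_t x, size_t y, Pixel p)
{
    pixels[y * width + x] = p;
}
```

The color-under-touch demo is the same lookup: convert the touch point to the texture's pixel coordinates and call the getter.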
Still needs a little testing.
--I need someone to test on an iPad Air. 64-bit should be OK because everything is cast to GLubyte, but it needs testing.
--Dealloc is not getting called, but it doesn't show any leaks. This is my first time using ARC, so I'm having a hard time finding out why. If someone who knows ARC better could look through and see whether I'm doing something wrong, that would help. Do I need __bridge on the CV… objects in initWithWidth? Any ideas welcome.
--I couldn't test that the depth- and stencil-buffer extensions still work, because I honestly have no idea how to use them.
Tell me what you think: improvements, ideas, problems.