Manual Camera on iPhone

19 Mar 2012

Today I succeeded at streaming video from the camera and showing it on the screen, and also at manipulating the pixels before they are displayed.

The normal way (a sketch follows the list) is to:

  1. Write your own `AVCaptureVideoDataOutputSampleBufferDelegate`
  2. Get the `CVImageBufferRef` (which is a `CVPixelBufferRef`) from the `CMSampleBufferRef`
  3. Manipulate the pixels at their addresses
  4. Create a `CGImageRef` from the `CVImageBufferRef`
  5. Set the `contents` of the `CALayer` to your `CGImageRef`
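To make these steps concrete, here is a minimal sketch of such a delegate. It assumes the video data output is configured for `kCVPixelFormatType_32BGRA` and that `previewLayer` is a `CALayer` already installed on screen; the class name, the property, and the pixel tweak (a color invert) are all illustrative, not part of any fixed API.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>
#import <QuartzCore/QuartzCore.h>

@interface FrameProcessor : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) CALayer *previewLayer; // assumed to be on screen already
@end

@implementation FrameProcessor

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Step 2: the CVImageBufferRef backing this frame.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Core Video requires locking before touching the base address.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *base   = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t width    = CVPixelBufferGetWidth(imageBuffer);
    size_t height   = CVPixelBufferGetHeight(imageBuffer);
    size_t rowBytes = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Step 3: manipulate the pixels in place, no copy.
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = base + y * rowBytes;
        for (size_t x = 0; x < width; x++) {
            row[x * 4 + 0] = 255 - row[x * 4 + 0]; // B
            row[x * 4 + 1] = 255 - row[x * 4 + 1]; // G
            row[x * 4 + 2] = 255 - row[x * 4 + 2]; // R
        }
    }

    // Step 4: wrap the modified pixels in a CGImageRef.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(base, width, height, 8, rowBytes,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Step 5: hand the image to the layer on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.previewLayer.contents = (__bridge id)image;
        CGImageRelease(image); // the layer retains its contents
    });

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}

@end
```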

That is the normal way. You may also channel the `CVImageBufferRef` to OpenGL ES in order to manipulate the pixels with an OpenGL shader.
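For the OpenGL route, a rough sketch of the zero-copy path, assuming iOS 5 or later: `CVOpenGLESTextureCache` maps the `CVImageBufferRef` straight into a GL texture, and a fragment shader then does the per-pixel work on the GPU. The cache would be created once with `CVOpenGLESTextureCacheCreate` against your `EAGLContext`; the shader (again a color invert) and the helper name are illustrative.

```objc
#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// GLSL fragment shader run per pixel on the GPU -- here, a color invert.
static NSString *const kFragmentShader =
    @"varying highp vec2 vTexCoord;\n"
     "uniform sampler2D uFrame;\n"
     "void main() {\n"
     "    highp vec4 c = texture2D(uFrame, vTexCoord);\n"
     "    gl_FragColor = vec4(1.0 - c.rgb, c.a);\n"
     "}";

// Map a camera frame into a GL texture without copying the pixels.
static CVOpenGLESTextureRef TextureForFrame(CVOpenGLESTextureCacheRef cache,
                                            CVImageBufferRef imageBuffer)
{
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, imageBuffer,
        NULL,                                       // texture attributes
        GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)CVPixelBufferGetWidth(imageBuffer),
        (GLsizei)CVPixelBufferGetHeight(imageBuffer),
        GL_BGRA, GL_UNSIGNED_BYTE,
        0,                                          // plane index (0 for BGRA)
        &texture);
    return texture; // caller binds it, draws, then CFReleases it
}
```

You would then bind the result with `glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture))` and draw a quad through a program compiled from the shader above.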

The vital technique is to copy nothing and instead manipulate the pixels directly at their addresses, which is what the lock/unlock pair in the first sketch and the texture cache in the second both make possible.

It seems to me that using an OpenGL shader is the better idea because:

  1. It runs on the GPU, so it does not consume CPU time.
  2. It is optimized for parallel work.

Give it kudos.