IRE – Media processing (Part 4 of 7)


The special lenses of the Oculus Rift require the image to be pre-distorted before it is displayed. This can be done with the official SDK. Its documentation is quite good, but luckily I also found the upcoming Oculus Rift in Action book with its nice source code examples. I used its Hello Rift example as the base for my code.

The application has two threads. The main thread handles the OpenGL side: the camera image is uploaded to the texture, the frame is rendered, and so on. The second thread grabs both camera images with OpenCV. To make sure the two threads don't interfere with each other, a mutex and two condition variables are used.
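For clarity, here is roughly what the shared state between the two threads looks like. The names are taken from the snippets below, but the declarations themselves are a sketch and may differ from the full source:

#include <mutex>
#include <condition_variable>
#include <opencv2/core/core.hpp>   // IplImage

using namespace std;

// Shared state between the camera thread and the main (draw) thread.
// Names follow the snippets below; exact declarations are a sketch.
mutex              camMutex;
condition_variable drawFinishCondition;  // signaled when the main thread has consumed the images
condition_variable camFinishCondition;   // signaled when the camera thread has written new images
bool drawFinish   = true;                // main thread done, camera thread may overwrite the buffers
bool camFinish    = false;               // camera thread done, new images ready to be swapped in
bool terminateApp = false;               // set on shutdown so the camera thread can exit

// double buffers, one write and one read image per eye (0 = left, 1 = right)
IplImage* perEyeWriteImage[2];
IplImage* perEyeReadImage[2];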

Camera thread

The cameras offer two compression modes: YUV and MJPEG. With the standard OpenCV camera implementation only YUV is available, but MJPEG is necessary for 60 fps. Therefore the DirectShow backend of OpenCV has to be used. That is done by opening the camera with an id in the range 700–799 (CV_CAP_DSHOW, which is 700, plus the device index) and then setting the FOURCC to MJPG.

// left camera: DirectShow backend (700 = CV_CAP_DSHOW + device 0), MJPEG at full resolution
CvCapture* camLeft = cvCaptureFromCAM(700);
cvSetCaptureProperty(camLeft, CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
cvSetCaptureProperty(camLeft, CV_CAP_PROP_FRAME_WIDTH, CAM_IMAGE_WIDTH);
cvSetCaptureProperty(camLeft, CV_CAP_PROP_FRAME_HEIGHT, CAM_IMAGE_HEIGHT);

// right camera: DirectShow backend, device 1
CvCapture* camRight = cvCaptureFromCAM(701);
cvSetCaptureProperty(camRight, CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
cvSetCaptureProperty(camRight, CV_CAP_PROP_FRAME_WIDTH, CAM_IMAGE_WIDTH);
cvSetCaptureProperty(camRight, CV_CAP_PROP_FRAME_HEIGHT, CAM_IMAGE_HEIGHT);
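As a side note, with OpenCV's C++ API (3.0 or newer) the same setup would look roughly like this; cv::CAP_DSHOW is the named constant behind the 700 offset. This is a sketch, not code from the project:

#include <opencv2/videoio.hpp>

// sketch only: same camera setup with the C++ API (OpenCV 3+)
cv::VideoCapture camLeft(cv::CAP_DSHOW + 0);   // DirectShow backend, device 0
camLeft.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
camLeft.set(cv::CAP_PROP_FRAME_WIDTH, CAM_IMAGE_WIDTH);
camLeft.set(cv::CAP_PROP_FRAME_HEIGHT, CAM_IMAGE_HEIGHT);

cv::VideoCapture camRight(cv::CAP_DSHOW + 1);  // DirectShow backend, device 1
camRight.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
camRight.set(cv::CAP_PROP_FRAME_WIDTH, CAM_IMAGE_WIDTH);
camRight.set(cv::CAP_PROP_FRAME_HEIGHT, CAM_IMAGE_HEIGHT);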

Our tests have shown that accessing the two cameras from separate threads performs worse than using a single thread. So that the camera thread can already request the next frame while the current one is being uploaded to the graphics card in the main thread, each frame is copied into a dedicated image buffer.
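These buffers are plain IplImages matching the camera format. Allocating them could look roughly like this; the exact code in the full source may differ:

// sketch: per-eye buffers in the camera format (8-bit, 3-channel BGR)
for (int i = 0; i < 2; ++i) {
	perEyeWriteImage[i] = cvCreateImage(cvSize(CAM_IMAGE_WIDTH, CAM_IMAGE_HEIGHT), IPL_DEPTH_8U, 3);
	perEyeReadImage[i] = cvCreateImage(cvSize(CAM_IMAGE_WIDTH, CAM_IMAGE_HEIGHT), IPL_DEPTH_8U, 3);
}

In its loop, the camera thread first waits until the main thread has released the buffers: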

// wait on draw thread
{
	unique_lock<mutex> lck(camMutex);
	while (!drawFinish && !terminateApp) { drawFinishCondition.wait(lck); }
}

imageLeft = cvQueryFrame(camLeft);
if (imageLeft) {
	// copy the full frame into the dedicated write buffer
	// (assumes tightly packed 3-byte BGR pixels, i.e. no row padding)
	memcpy(perEyeWriteImage[0]->imageData, imageLeft->imageData, CAM_IMAGE_HEIGHT * CAM_IMAGE_WIDTH * 3);
}
else {
	SAY("Didn't get image of left cam");
}

imageRight = cvQueryFrame(camRight);
if (imageRight) {
	memcpy(perEyeWriteImage[1]->imageData, imageRight->imageData, CAM_IMAGE_HEIGHT * CAM_IMAGE_WIDTH * 3);
}
else {
	SAY("Didn't get image of right cam");
}

long now = Platform::elapsedMillis(); // elapsed time in milliseconds
{
	unique_lock<mutex> lck(camMutex);
	drawFinish = false;
	camFinish = true;
	camFinishCondition.notify_all();
}

Main thread

For each eye image there are two buffers. Their pointers are swapped in the main thread as soon as the camera thread has finished. This guarantees that an image is never read while the camera thread is still writing to it, so no tearing occurs. After the swap, the camera thread is told to resume.

// wait for camera thread
{
	unique_lock<mutex> lck(camMutex);

	while (!camFinish) {
		camFinishCondition.wait(lck);
	}
}

// swap image pointers
for (int i = 0; i < 2; ++i)
{
	IplImage* tempImage = perEyeReadImage[i];
	perEyeReadImage[i] = perEyeWriteImage[i];
	perEyeWriteImage[i] = tempImage;
}

// notify camera thread to resume
{
	unique_lock<mutex> lck(camMutex);
	camFinish = false;
	drawFinish = true;
	drawFinishCondition.notify_all();
}

In the meantime, the visible part of the image is uploaded to the OpenGL texture. The texture is then mapped onto a rectangle, which is positioned and scaled as necessary.

// only upload the necessary part of the image:
// GL_UNPACK_ROW_LENGTH tells OpenGL how wide the source rows really are,
// GL_UNPACK_SKIP_PIXELS skips the first 189 pixels of every row (horizontal crop)
glPixelStorei(GL_UNPACK_ROW_LENGTH, CAM_IMAGE_WIDTH);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 189);

// bind and load camera image
imageTextures[eye]->bind();
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, RENDER_IMAGE_WIDTH, RENDER_IMAGE_HEIGHT, GL_BGR, GL_UNSIGNED_BYTE, perEyeReadImage[eye]->imageData);

glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
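Note that glTexSubImage2D only updates an already allocated texture. The one-time allocation in the texture class could look roughly like this; the GL_RGB8 internal format and linear filtering are assumptions, not taken from the project:

// sketch: one-time allocation of a per-eye camera texture
// (GL_RGB8 internal format and linear filtering are assumptions)
imageTextures[eye]->bind();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, RENDER_IMAGE_WIDTH, RENDER_IMAGE_HEIGHT, 0, GL_BGR, GL_UNSIGNED_BYTE, nullptr);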

Finally, the Oculus Rift SDK distorts the stereo image and the finished frame is displayed on the screen.

You can find the whole code on GitHub.

Oh, and the audio: at the moment the microphone signal is simply passed through to HDMI with the standard Windows settings.

The next article will cover the gyrating camera rig, or rather the moving head.

If you have any suggestions or questions, please use the comment form. I am always happy to learn something new.