Getting a scene rendered by Ogre onto the Oculus Rift is a somewhat involved process. With a basic knowledge of Ogre, and some trial and error while browsing the Ogre wiki, documentation and source code itself, I got the thing running again each time Oculus changed the way it worked.
Since we are now at version 0.8 of the SDK, and 1.0 will probably not change much on this front, I think I can write some sort of guide: while browsing my Ogre-powered VR game engine, I'll tell you the story of how it works, step by step.
I'll paste here some code with explanations. It's not structured into classes because I don't know how you want to organize your own. I don't use the Ogre application framework because I want to choose myself the order in which things happen.
Before we dive in: some warnings and considerations.
This post only documents how I made it work for my projects. There's probably a cleaner way to implement it, but this is what I did to make something that runs.
This is using Ogre 1.9 and the Oculus SDK 0.8.0.0-beta. For more recent versions of these libraries, this may not apply.
Also, I'm using Ogre's RenderSystem_GL and not RenderSystem_GL3+. I probably should use GL3+, but I started with the old one and its fixed pipeline, and it shouldn't fundamentally change the way this works. I'm getting a bit more familiar with OpenGL core-profile programming, and I think I will update my engine to the GL3+ renderer. This will probably require using Ogre's RTSS to generate shader programs replacing the old fixed-pipeline functions, or writing GLSL from the ground up. And I didn't want to mess with the OpenGL pipeline here.
Last warning before we go: there are some direct GL calls that require OpenGL 4.3 (glCopyImageSubData). You need your graphics card and graphics driver to support OpenGL 4.3+ (and if you have hardware powerful enough to use the Rift, you do). We also need to include and use GLEW to make OpenGL calls.
So, I started making a little game engine for Virtual Reality (and by that I mean: just for messing with an Oculus Rift devkit). And since I didn't want to get my hands too dirty, I avoided writing graphics code directly, and Ogre seemed a nice solution for "not having to do everything myself while only using free software and writing my own C++ code", so I started with that.
I followed along with the different changes made by Oculus on the way (they have a tendency to break their own API each time they can), from the time the Rift was just an extension of your desktop to the way they do it now.
Long story short: in the latest few SDK revisions, they introduced something called the Oculus Compositor. The user has to install a "runtime" that contains several drivers and other goodies, including a background service that talks to the Rift. They also worked with GPU vendors (Nvidia and AMD) to integrate a more direct way to access a screen at low level through their drivers (without having to hack into Windows's graphics stack), and they implemented something called "Direct Driver Mode" that doesn't show the Rift as a regular computer screen anymore.
The idea is simple: the Oculus runtime talks to the Rift and your application has no direct access to it. You get every piece of information you need from the runtime (head/eye position, size of display, etc.) and you submit frame content to update.
The Oculus Compositor has "layers" you can put stuff on. Some layers are made for images and 2D HUDs (to render them at a higher resolution), and some are for your 3D environment (ovrLayerType_EyeFov). The latter is the only one we are interested in today.
The Oculus Rift screen is in front of both of the user's eyes. The left part of it is for the left eye (same goes for the right part). The goal is to put there a distorted image that matches the inverse of the lens distortion, to fill the user's field of view with the desired image. The lenses have a really high magnification, but this introduces heavy distortion and chromatic aberration that have to be corrected. The image has to be rendered for the immediate position/orientation of the player.
The distortion calculation and the chromatic aberration correction are done by the Oculus Compositor itself (in the past, they gave you the shader code to do it). We just need to render our scene at the right position/orientation and field of view.
The field of view of a camera is basically defined by what we call a projection matrix. The Oculus SDK can calculate each eye's projection matrix. We obviously need to create two cameras to render the scene, one for each eye.
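The SDK does this with ovrMatrix4f_Projection (we'll call it later in this post). Under the hood, such a helper builds an off-axis (asymmetric) frustum from the tangents of the four half-angles of the eye's field of view. Here is a self-contained sketch of that math for OpenGL clip conventions; this is my own illustration of the principle, not the SDK's code:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<float, 4>, 4>;

//Illustrative only: build an OpenGL-style projection matrix from the tangents
//of the up/down/left/right half-angles of the field of view, the way an
//asymmetric-frustum helper like ovrMatrix4f_Projection conceptually works.
Mat4 projectionFromFov(float tanUp, float tanDown, float tanLeft, float tanRight,
                       float zNear, float zFar)
{
    Mat4 m{}; //zero-initialized
    m[0][0] = 2.f / (tanLeft + tanRight);                  //horizontal scale
    m[0][2] = (tanRight - tanLeft) / (tanLeft + tanRight); //horizontal off-center shift
    m[1][1] = 2.f / (tanUp + tanDown);                     //vertical scale
    m[1][2] = (tanUp - tanDown) / (tanUp + tanDown);       //vertical off-center shift
    m[2][2] = -(zFar + zNear) / (zFar - zNear);            //depth mapped into [-1, 1]
    m[2][3] = -2.f * zFar * zNear / (zFar - zNear);
    m[3][2] = -1.f;                                        //perspective divide by -z
    return m;
}
```

For a symmetric field of view (all four tangents equal) this collapses to the familiar gluPerspective-style matrix; the Rift's per-eye FoV is asymmetric, which is exactly why the off-center terms in column 2 exist.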
The tricky part is to give the rendered image to the Oculus Compositor. The way it's intended to work is to request a render texture from the Oculus Compositor and render to it (what we call "RTT" rendering: Render To Texture).
The only problem is that there's no easy way to make Ogre use a texture that hasn't been created by its own TextureManager. The work-around is to render to an Ogre internal texture and to copy the result to the Oculus texture. Fortunately, this can be done directly in GPU memory without costing much processing power.
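To see why this copy is simple, here are the semantics of glCopyImageSubData (which we'll call later) written as a plain CPU loop over a sub-rectangle. This is only an illustration of what the call does: the real thing operates on GL texture objects and stays entirely on the GPU.

```cpp
#include <vector>

//Illustrative CPU equivalent of what glCopyImageSubData does on the GPU:
//copy a width x height rectangle from one image to another, with independent
//source and destination offsets. Images are row-major, one int per texel.
struct Image
{
    int w, h;
    std::vector<int> texels;
    Image(int w, int h) : w(w), h(h), texels(w * h, 0) {}
    int& at(int x, int y) { return texels[y * w + x]; }
};

void copySubImage(Image& src, int srcX, int srcY,
                  Image& dst, int dstX, int dstY,
                  int width, int height)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst.at(dstX + x, dstY + y) = src.at(srcX + x, srcY + y);
}
```

No format conversion, no filtering: a raw rectangle copy, which is why doing it once per frame between the Ogre texture and the Oculus texture is cheap.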
To get information about what Ogre is doing behind the scenes, we will need access to the RenderSystem_GL in our code. This means two important things:
- The program will not run with another RenderSystem “plugin” initialized
- We need to include the headers of the RenderSystem's components we need to access
So, we will assume (or we will load manually) that RenderSystem_GL is the RenderSystem used by Ogre, and so we will permit ourselves to do crazy things like this:

Ogre::GLTextureManager* textureManager = static_cast<Ogre::GLTextureManager*>(Ogre::TextureManager::getSingletonPtr());

Here, for example, we cast the TextureManager to a GLTextureManager. Since we use RenderSystem_GL, the instance of the TextureManager that is instantiated is, in fact, a GLTextureManager (yay! polymorphism!). We will use this textureManager pointer later.
So, I'm not giving a piece of code that works "out of the box". I have a slightly older implementation somewhere on GitHub that I will have to update, but I will explain what to do to achieve it, step by step, without actually structuring the code into good-looking, usable and correctly named classes…
Everything regarding the Oculus SDK that is used here can be found here: https://developer.oculus.com/documentation/pcsdk/latest/
The first thing we want to do is to initialize the Oculus SDK. It is presented as a library, called "LibOVR", that you have to link into your code. This library is mostly in good old C (not C++) and prefixes everything that belongs to it with "ovr" (for "Oculus Virtual Reality").
Some components (all the maths stuff, actually) have C++ classes. They are wrapped in a namespace called "OVR". I will assume that the directive using namespace OVR; has not been issued, so I will write OVR:: where it's needed.
So, we need a bunch of variables and objects to hold what we get from LibOVR. We will declare everything useful here:
//This represents the HMD (Head Mounted Display). In SDK 0.8 the session handle type is ovrHmd
ovrHmd Hmd;
//This is a structure that contains every parameter of the used HMD
ovrHmdDesc HmdDesc;
//Tracking state: the headset position & orientation at a given instant
ovrTrackingState ts;
Next, we can do the initialization (ovr_Initialize has to be called before anything else in LibOVR):

if(ovr_Initialize(nullptr) != ovrSuccess)
	std::cerr << "Error: Cannot initialize LibOVR";
ovrGraphicsLuid luid;
ovrResult r = ovr_Create(&Hmd, &luid);
if(r != ovrSuccess)
{
	std::cerr << "Error: Cannot get HMD";
	MessageBox(NULL,
		L"Can't find any Oculus HMD!\n\n(Please note that if you want to use this program\
 without an Oculus Rift, you NEED to activate the \"debug hmd\" setting \
on the Oculus runtime configuration utility)",
		L"Error, No Oculus HMD found!",
		MB_ICONERROR);
}
HmdDesc = ovr_GetHmdDesc(Hmd);
(MessageBox is a function from Windows.h. The main cause of ovr_Create failing is that there's no Oculus Rift attached to the system. You can tell the Oculus configuration utility to use a "debug HMD": it will then accept to initialize as if there was a DK1 or DK2 plugged in via USB and HDMI.)
At this stage, we can display information about the currently connected headset. Here's an example:
std::cerr << "================================================" << std::endl;
std::cerr << "Detected Oculus Rift device :" << std::endl;
std::cerr << "Product name : " << HmdDesc.ProductName << std::endl;
std::cerr << "Serial number : " << HmdDesc.SerialNumber << std::endl;
std::cerr << "Manufacturer : " << HmdDesc.Manufacturer << std::endl;
std::cerr << "Display Resolution : " << HmdDesc.Resolution.w << "x" << HmdDesc.Resolution.h << std::endl;
std::cerr << "Type of HMD identifier : " << HmdDesc.Type << std::endl;
std::cerr << "Firmware version : " << HmdDesc.FirmwareMajor << "." << HmdDesc.FirmwareMinor << std::endl;
std::cerr << "================================================" << std::endl;
If you want to directly get position/orientation information from the headset, you can even do:
//Update the current tracking state. "displayTime" is the absolute time (in seconds) the frame will be displayed at
double displayTime = ovr_GetPredictedDisplayTime(Hmd, 0);
ts = ovr_GetTrackingState(Hmd, displayTime, ovrTrue);
//Get Pose information
OVR::Quatf orientation = ts.HeadPose.ThePose.Orientation;
//You can even extract (yaw, pitch, roll) Euler angles from the quaternion
float o_y, o_p, o_r;
orientation.GetEulerAngles<OVR::Axis_Y, OVR::Axis_X, OVR::Axis_Z>(&o_y, &o_p, &o_r);
A "pose" is the term used for the user's point of view on the VR scene. It contains information like the head position and orientation.
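If you're curious what GetEulerAngles actually computes, the Y-X-Z (yaw, pitch, roll) decomposition of a unit quaternion is short enough to write out. This is my own illustrative version of the math, not LibOVR's code:

```cpp
#include <cmath>

//Illustrative only: yaw/pitch/roll extraction from a unit quaternion for a
//Y-X-Z decomposition, the same convention as GetEulerAngles<Axis_Y, Axis_X, Axis_Z>.
struct Quat { float w, x, y, z; };

void quatToYawPitchRoll(const Quat& q, float& yaw, float& pitch, float& roll)
{
    //yaw: rotation around Y, pitch: around X, roll: around Z
    yaw   = std::atan2(2.f * (q.x * q.z + q.w * q.y),
                       1.f - 2.f * (q.x * q.x + q.y * q.y));
    pitch = std::asin (2.f * (q.w * q.x - q.y * q.z));
    roll  = std::atan2(2.f * (q.x * q.y + q.w * q.z),
                       1.f - 2.f * (q.x * q.x + q.z * q.z));
}
```

A pure 90° yaw quaternion (w = cos 45°, y = sin 45°) comes back as yaw = π/2 with zero pitch and roll, which is a quick sanity check of the convention.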
Now, let's talk about manually initializing Ogre. I'm not a fan of Ogre's application framework, and I prefer to do stuff my way. That's why I decided to show you the steps you need to render to the Rift with Ogre's OpenGL renderer, without organizing this code into classes.
We need to include Ogre and some components from the Oculus library at this point:

//Headers of the RenderSystem_GL (include paths may differ depending on your Ogre install)
#include <OgreGLRenderSystem.h>
#include <OgreGLTextureManager.h>
#include <OgreGLRenderTexture.h>
#include <OgreGLTexture.h>
We need to store pointers to Ogre’s objects:
///Ogre Root instance
Ogre::Root* root;
///Ogre Render Window for debugging output
Ogre::RenderWindow* window;
///Ogre Scene Managers
Ogre::SceneManager* smgr, * debugSmgr;
///Stereoscopic camera array. Indexes are "left" and "right" + debug view cam
Ogre::Camera* cams[2], * debugCam;
///Nodes for the debug scene
Ogre::SceneNode* debugCamNode, * debugPlaneNode;
///Node that stores camera position/orientation
Ogre::SceneNode* CameraNode;
///Gameplay position/orientation of the player's head, taken from CameraNode
Ogre::Vector3 cameraPosition;
Ogre::Quaternion cameraOrientation;
///Material and texture unit used by the debug plane
Ogre::MaterialPtr DebugPlaneMaterial;
Ogre::TextureUnitState* debugTexturePlane;
///Viewports on textures. Textures are separated. One viewport for each texture
Ogre::Viewport* vpts[2], *debugViewport;
///The Z axis clipping plane distances
Ogre::Real nearClippingDistance, farClippingDistance;
Note: everything with the "debug" prefix in its name is only used to show the mirrored view from the HMD in the window. The program doesn't render to the window it creates but to a texture (we will do RTT rendering, so we need to set up a texture as a render buffer and to put camera viewports on it).
We also need a bunch of variables to communicate with the Oculus Rift:
///Fov descriptor for each eye. Indexes are "left" and "right"
ovrFovPort fovs[2];
///Render descriptor for each eye. Indexes are "left" and "right"
ovrEyeRenderDesc EyeRenderDesc[2];
///Sizes of the eye textures and of the shared render buffer
ovrSizei texSizeL, texSizeR, bufferSize;
///OpenGL Texture ID of the mirror texture buffers
GLuint oculusMirrorTextureID, ogreMirrorTextureID;
///The mirror texture given by the Oculus runtime
ovrTexture* mirrorTexture;
///Compositing layer for the rendered scene
ovrLayerEyeFov layer;
///GL texture set for the rendering
ovrSwapTextureSet* textureSet;
///GL Texture ID of the render texture
GLuint renderTextureID;
///Offset between render center and camera (for IPD variation)
ovrVector3f offset[2];
///Timing in seconds
double currentFrimeDisplayTime, lastFrameDisplayTime, updateTime;
///Current eye getting updated
ovrEyeType eye;
///Orientation of the headset
OVR::Quatf oculusOrient;
///Position of the headset
OVR::Vector3f oculusPos;
///Pose of the headset (position + orientation)
ovrPosef pose;
///Pointer to the layer to be submitted
ovrLayerHeader* layers;
///State of the Oculus performance HUD
int perfHudMode;
A few things here are present as arrays of two elements. To make the code a little nicer, I propose to declare an enum that associates "left" with index 0 and "right" with 1:
enum { left = 0, right = 1 };
We can now start initializing Ogre:
//Create the Ogre root with the standard Ogre configuration file
root = new Ogre::Root("", "ogre.cfg");
//Note that I'm not using the plugins cfg file. You can. But since I want to
//hard-code the use of RenderSystem_GL, I load the plugins I want manually here:
root->loadPlugin("RenderSystem_GL");
root->loadPlugin("Plugin_OctreeSceneManager");
root->setRenderSystem(root->getRenderSystemByName("OpenGL Rendering Subsystem"));
//false tells Ogre not to automatically create a window
root->initialise(false);
Next we create a window. We still want a window so it's easy to get events from the system (if you use OIS, for example, you want to pass it the HWND of the window and keep it focused). The window itself is not useful for rendering to the Rift, but displaying the debug (mirrored) view from the headset in it is useful during development or demonstration (since only one person can wear the headset, and you probably won't want to put it on and take it off at each change you make to your code). The Oculus runtime can provide an OpenGL texture with a copy of the Rift's content, called the mirror texture. We will use it in our render window.
Ogre::NameValuePairList misc;
//The Oculus Compositor is V-Synced. We do not want the render window to wait for v-sync too.
misc["vsync"] = "false";
float w(HmdDesc.Resolution.w), h(HmdDesc.Resolution.h);
//I use a 1920x1080 screen for developing. I don't want this window to be any larger,
//so I halve the size if it's a DK2 or something bigger
if(w >= 1920) w /= 2;
if(h >= 1080) h /= 2;
std::string name("My Oculus VR App with Ogre");
//Create the window
window = root->createRenderWindow(name + ": Mirror output (Please put on your headset)", w, h, false, &misc);
Ogre keeps every object of the 3D scene inside a tree (a graph with no cycles) managed by a Scene Manager. We need to create at least one scene manager to do anything useful with Ogre. There are multiple types. I manually loaded the "OctreeSceneManager" plugin earlier, so it's time to use it!
smgr = root->createSceneManager("OctreeSceneManager", "OSM_SMGR");
Since I want to use my window as a debug output, I will create another scene manager, add a quad to it with a 16:9 aspect ratio, put the mirrored texture on it, and render it to the window with an orthographic projection and no lighting whatsoever.
//Create the scene manager for the debug output
debugSmgr = root->createSceneManager(Ogre::ST_GENERIC);
debugSmgr->setAmbientLight(Ogre::ColourValue::White); //no shadow
//Create the debug camera, orthographic, framing the 16:9 quad
debugCam = debugSmgr->createCamera("DebugRender");
float X(16), Y(9);
debugCam->setProjectionType(Ogre::PT_ORTHOGRAPHIC);
debugCam->setOrthoWindow(X, Y);
debugCam->setNearClipDistance(0.1f);
//Add it to the scene
debugCamNode = debugSmgr->getRootSceneNode()->createChildSceneNode();
debugCamNode->attachObject(debugCam);
//--------------Create the debug plane
//We manually create a quad with 4 vertices (2 triangles), and assign texture coordinates to each corner of a 2D plane
debugPlaneNode = debugCamNode->createChildSceneNode();
//Just put some distance between the camera and the plane by moving it on -Z
debugPlaneNode->setPosition(0, 0, -1);
//Create the manual object
Ogre::ManualObject* debugPlane = debugSmgr->createManualObject("DebugPlane");
//Create a Material
DebugPlaneMaterial = Ogre::MaterialManager::getSingleton().create("DebugPlaneMaterial", "General", true);
debugTexturePlane = DebugPlaneMaterial.getPointer()->getTechnique(0)->getPass(0)->createTextureUnitState();
//The manual object itself, with the material, as a triangle strip
debugPlane->begin("DebugPlaneMaterial", Ogre::RenderOperation::OT_TRIANGLE_STRIP);
//4 vertices with texture coordinates
float x(X/2), y(Y/2);
debugPlane->position(-x, y, 0); debugPlane->textureCoord(0, 0);
debugPlane->position(-x, -y, 0); debugPlane->textureCoord(0, 1);
debugPlane->position(x, y, 0); debugPlane->textureCoord(1, 0);
debugPlane->position(x, -y, 0); debugPlane->textureCoord(1, 1);
debugPlane->end();
//Add it to the scene
debugPlaneNode->attachObject(debugPlane);
We can now create the render cameras. We also add a "camera control node" that you can expose to move the base PoV of the scene:
cams[left] = smgr->createCamera("lcam");
cams[right] = smgr->createCamera("rcam");
//do NOT attach camera to this node...
CameraNode = smgr->getRootSceneNode()->createChildSceneNode();
Now we need to create OpenGL textures inside Ogre that will match the render buffer created for us by the Oculus SDK. Just before that, I would like to initialize GLEW to be able to access modern OpenGL functions:
//Init GLEW here to be able to call OpenGL functions
GLenum err = glewInit();
if(err != GLEW_OK)
	std::cerr << "Failed to glewInit()\nCannot call manual OpenGL\nError Code : " << err << std::endl;
Now we need to ask the Oculus runtime to create the rendering texture set for us. It's an array of textures we have to write our render output to. We will put both eyes on the same texture, so we will create one large enough for both eyes.
We want the maximum coverage of the user's FoV, so we will ask for it:
//Get texture size from ovr with the maximal FOV for each eye
texSizeL = ovr_GetFovTextureSize(Hmd, ovrEye_Left, HmdDesc.MaxEyeFov[left], 1.0f);
texSizeR = ovr_GetFovTextureSize(Hmd, ovrEye_Right, HmdDesc.MaxEyeFov[right], 1.0f);
//Calculate the render buffer size for both eyes
bufferSize.w = texSizeL.w + texSizeR.w;
bufferSize.h = std::max(texSizeL.h, texSizeR.h);
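The bookkeeping behind this side-by-side layout is simple enough to verify in isolation: total width is the sum of the per-eye widths, total height is the larger of the two, and each eye later gets one half of the buffer as its viewport. A trivial self-contained sketch (Size and Rect are my stand-ins for ovrSizei/ovrRecti):

```cpp
#include <algorithm>

//Stand-ins for ovrSizei / ovrRecti, just to illustrate the side-by-side layout
struct Size { int w, h; };
struct Rect { int x, y, w, h; };

//Both eye textures packed next to each other in one shared buffer
Size sharedBufferSize(Size leftEye, Size rightEye)
{
    return { leftEye.w + rightEye.w, std::max(leftEye.h, rightEye.h) };
}

//Left eye occupies the left half of the shared buffer, right eye the right half
Rect eyeViewport(Size buffer, bool rightEye)
{
    return { rightEye ? buffer.w / 2 : 0, 0, buffer.w / 2, buffer.h };
}
```

These are the same half-buffer rectangles we will hand to Ogre's addViewport (in normalized coordinates) and to layer.Viewport (in pixels) further down.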
We will use "bufferSize" to create the texture:
//Request the creation of an OpenGL swap texture set from the Oculus library
if (ovr_CreateSwapTextureSetGL(Hmd, GL_SRGB8_ALPHA8, bufferSize.w, bufferSize.h, &textureSet) != ovrSuccess)
	//If we can't get the textures, there is no point in trying more.
	std::cerr << "Cannot create Oculus swap texture" << std::endl;
We create an equivalent texture inside Ogre. We will give it the name "RttTex" just to make it easier to fetch back:
//Create the Ogre equivalent of the texture as a render target for Ogre
Ogre::TexturePtr rtt_texture(textureManager->createManual("RttTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
Ogre::TEX_TYPE_2D, bufferSize.w, bufferSize.h, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET));
Ogre::RenderTexture* rttEyes = rtt_texture->getBuffer(0, 0)->getRenderTarget();
(Note that we are sure we called the createManual() implemented by Ogre::GLTextureManager.)
Now we need to get the OpenGL texture ID of this texture. Ogre::GLTexture has a getGLID() method that returns the GLuint we want:
Ogre::GLTexture* gltex = static_cast<Ogre::GLTexture*>(Ogre::GLTextureManager::getSingleton().getByName("RttTex").getPointer());
renderTextureID = gltex->getGLID();
We can now add our render viewports to this texture:
vpts[left] = rttEyes->addViewport(cams[left], 0, 0, 0, 0.5f);
vpts[right] = rttEyes->addViewport(cams[right], 1, 0.5f, 0, 0.5f);
The same goes for the mirror texture:
if (ovr_CreateMirrorTextureGL(Hmd, GL_SRGB8_ALPHA8 , HmdDesc.Resolution.w,
HmdDesc.Resolution.h, &mirrorTexture) != ovrSuccess)
//If for some weird reason (star alignment, dragons, Northern gods, Reaper invasion) we can't create the mirror texture
std::cerr << "Cannot create Oculus mirror texture" << std::endl;
//Create the Ogre equivalent of this buffer
Ogre::TexturePtr mirror_texture(textureManager->createManual("MirrorTex", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
Ogre::TEX_TYPE_2D, HmdDesc.Resolution.w, HmdDesc.Resolution.h, 0, Ogre::PF_R8G8B8, Ogre::TU_RENDERTARGET));
//Save the GL texture id for updating the mirror texture
ogreMirrorTextureID = static_cast<Ogre::GLTexture*>(Ogre::GLTextureManager::getSingleton().getByName("MirrorTex").getPointer())->getGLID();
oculusMirrorTextureID = ((ovrGLTexture*)mirrorTexture)->OGL.TexId;
We also configure our debugging output the same way we did before, by adding a viewport from the debug camera to the window. We can also set the mirror texture on the debug plane's material:
//Attach the camera of the debug render scene to a viewport on the actual application window
debugViewport = window->addViewport(debugCam);
//Put the mirror texture on the debug plane's material
debugTexturePlane->setTextureName("MirrorTex");
debugTexturePlane->setTextureFiltering(Ogre::FO_POINT, Ogre::FO_POINT, Ogre::FO_NONE);
Now we can tell the Oculus Compositor what we put on the texture. We will create a single layer with our 3D render on it. If you want to put text or other things on top of it, you can add more layers, as per the Oculus documentation.
//Populate OVR structures
EyeRenderDesc[left] = ovr_GetRenderDesc(Hmd, ovrEye_Left, HmdDesc.MaxEyeFov[left]);
EyeRenderDesc[right] = ovr_GetRenderDesc(Hmd, ovrEye_Right, HmdDesc.MaxEyeFov[right]);
//Keep the per-eye offsets around: ovr_CalcEyePoses will need them each frame
offset[left] = EyeRenderDesc[left].HmdToEyeViewOffset;
offset[right] = EyeRenderDesc[right].HmdToEyeViewOffset;
//Create a layer with our single swaptexture on it. Each side is an eye.
layer.Header.Type = ovrLayerType_EyeFov;
layer.Header.Flags = 0;
layer.ColorTexture[left] = textureSet;
layer.ColorTexture[right] = textureSet;
layer.Fov[left] = EyeRenderDesc[left].Fov;
layer.Fov[right] = EyeRenderDesc[right].Fov;
layer.Viewport[left] = OVR::Recti(0, 0, bufferSize.w/2, bufferSize.h);
layer.Viewport[right] = OVR::Recti(bufferSize.w/2, 0, bufferSize.w/2, bufferSize.h);
//Get projection matrices for each eye:
for(size_t eyeIndex(0); eyeIndex < ovrEye_Count; eyeIndex++)
{
	//Get the projection matrix
	OVR::Matrix4f proj = ovrMatrix4f_Projection(EyeRenderDesc[eyeIndex].Fov,
		nearClippingDistance, farClippingDistance, ovrProjection_RightHanded);
	//Convert it to an Ogre matrix
	Ogre::Matrix4 OgreProj;
	for(size_t x(0); x < 4; x++)
		for(size_t y(0); y < 4; y++)
			OgreProj[x][y] = proj.M[x][y];
	//Set the matrix
	cams[eyeIndex]->setCustomProjectionMatrix(true, OgreProj);
}
//Make sure that the perf HUD will not show up by itself...
perfHudMode = ovrPerfHud_Off;
ovr_SetInt(Hmd, "PerfHudMode", perfHudMode);
Now everything is in place. For each frame of the game, you need to:
- Increment the index of the Oculus texture set
- Get window events; if you don't pump window messages, Windows will think your program is not responding and will ask to close it after some time
- Update the position of cams[left] and cams[right] according to data from the Oculus SDK
- Update the two viewports on the render texture (vpts[left/right])
- Copy the rendered image to the Oculus texture set
- Submit the frame
- Copy the mirror texture back to Ogre
- Update the window
I tend to separate the tracking update from the render part, since I may want to use the Oculus position/orientation inside my gameplay.
For updating the cameras:
//Get current camera base information
cameraPosition = CameraNode->getPosition();
cameraOrientation = CameraNode->getOrientation();
//Begin frame - get timing
lastFrameDisplayTime = currentFrimeDisplayTime;
ts = ovr_GetTrackingState(Hmd, currentFrimeDisplayTime = ovr_GetPredictedDisplayTime(Hmd, 0), ovrTrue);
updateTime = currentFrimeDisplayTime - lastFrameDisplayTime;
//Get the pose
pose = ts.HeadPose.ThePose;
ovr_CalcEyePoses(pose, offset, layer.RenderPose);
//Get the HMD orientation and position
oculusOrient = pose.Rotation;
oculusPos = pose.Translation;
//Apply pose to the two cameras
for(size_t eye = 0; eye < ovrEye_Count; eye++)
{
	//cameraOrientation and cameraPosition are the player's position/orientation in the scene
	cams[eye]->setOrientation(cameraOrientation * Ogre::Quaternion(oculusOrient.w, oculusOrient.x, oculusOrient.y, oculusOrient.z));
	cams[eye]->setPosition
		(cameraPosition //the "gameplay" position of the player's avatar head
		+ cams[eye]->getOrientation() * Ogre::Vector3( //real-world head orientation * the
			EyeRenderDesc[eye].HmdToEyeViewOffset.x,   //view adjust vector.
			EyeRenderDesc[eye].HmdToEyeViewOffset.y,   //The translation has to follow the current head orientation;
			EyeRenderDesc[eye].HmdToEyeViewOffset.z)   //that's why we multiply by the quaternion we just calculated.
		+ cameraOrientation * Ogre::Vector3( //cameraOrientation is the direction the avatar is facing
			oculusPos.x, oculusPos.y, oculusPos.z));   //oculusPos is the tracked position of the headset
}
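The arithmetic in this loop is easy to sanity-check outside the engine. Here is the same composition written with minimal hand-rolled types; this is only an illustration of the math, as in the real code Ogre's Quaternion * Vector3 operator performs the rotations:

```cpp
#include <cmath>

//Minimal vector/quaternion types, only to illustrate how the eye position is
//composed: world = avatarPos + headOrient * eyeOffset + avatarOrient * trackedHeadPos
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 v, float s) { return { v.x*s, v.y*s, v.z*s }; }

//Rotate v by unit quaternion q: v' = v + 2w(qv x v) + 2 qv x (qv x v)
Vec3 rotate(Quat q, Vec3 v)
{
    Vec3 qv{ q.x, q.y, q.z };
    Vec3 t = scale(cross(qv, v), 2.f);
    return add(add(v, scale(t, q.w)), cross(qv, t));
}

//World position of one eye's camera, mirroring the Ogre code above
Vec3 eyeWorldPosition(Vec3 avatarPos, Quat avatarOrient,
                      Vec3 trackedHeadPos, Quat worldHeadOrient, Vec3 eyeOffset)
{
    return add(avatarPos,
           add(rotate(worldHeadOrient, eyeOffset),    //IPD offset follows the head orientation
               rotate(avatarOrient, trackedHeadPos))); //tracked position follows the avatar facing
}
```

The key design point is visible here: the per-eye IPD offset is rotated by the combined (avatar * headset) orientation, while the tracked head translation is only rotated by the avatar's facing direction.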
And to actually render and submit the frame:
//Select the current render texture (for this frame)
textureSet->CurrentIndex = (textureSet->CurrentIndex + 1) % textureSet->TextureCount;
//Update the viewports
root->_fireFrameRenderingQueued(); //Some events inside Ogre are not fired if this is not called.
vpts[left]->update();
vpts[right]->update();
//Copy the rendered image to the Oculus swap texture
glCopyImageSubData(renderTextureID, GL_TEXTURE_2D, 0, 0, 0, 0,
	((ovrGLTexture*)(&textureSet->Textures[textureSet->CurrentIndex]))->OGL.TexId, GL_TEXTURE_2D, 0, 0, 0, 0,
	bufferSize.w, bufferSize.h, 1);
//Get the rendering layer
layers = &layer.Header;
//Submit the frame
ovr_SubmitFrame(Hmd, 0, nullptr, &layers, 1);
//Make the mirrored view available to Ogre
glCopyImageSubData(oculusMirrorTextureID, GL_TEXTURE_2D, 0, 0, 0, 0,
	ogreMirrorTextureID, GL_TEXTURE_2D, 0, 0, 0, 0,
	HmdDesc.Resolution.w, HmdDesc.Resolution.h, 1);
//Update the window showing the mirrored view
window->update();
It's not that complicated, but it is a fairly long chunk of code to achieve this.
This code can be found in this repository (once I update it): https://github.com/Ybalrid/ogre-oculus-opengl. If you have any questions, there's the comment section below. 😉