Welcome back. This time we met via Skype to discuss next steps. Ronny has been busy starting to implement our component system.
So what does that mean? You can now look at an object in an A-Frame test scene, grab it with a gaze or a click on your computer, and drop it onto another object. It is only about 40 lines of code, but it gives us our very first two defined components, with more to come. The goal is to enable content producers to enhance objects in scenes for explaining things or telling stories.
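To give a rough idea of what sits behind such a grab-and-drop pair, here is a minimal sketch of the drop logic as plain functions. The function names, the target list, and the snap radius are our illustrative assumptions, not the actual xrdok code; in A-Frame this logic would live inside registered components.

```javascript
// Sketch of drop logic: written as pure functions so it is readable
// outside a browser. Names and the snap radius are illustrative only.

// Euclidean distance between two {x, y, z} positions.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// When the user releases a grabbed object, find the nearest drop
// target within `snapRadius`, or return null if none is close enough.
function findDropTarget(droppedPos, targets, snapRadius) {
  let best = null;
  let bestDist = snapRadius;
  for (const target of targets) {
    const d = distance(droppedPos, target.position);
    if (d <= bestDist) {
      best = target;
      bestDist = d;
    }
  }
  return best;
}

// Example: two hypothetical drop targets in the scene.
const targets = [
  { id: 'table', position: { x: 5, y: 0, z: 0 } },
  { id: 'shelf', position: { x: 0, y: 1, z: 0 } },
];
const hit = findDropTarget({ x: 0, y: 0.5, z: 0 }, targets, 1.0);
console.log(hit.id); // → "shelf"
```

Dropping at a point half a meter below the shelf target snaps to it, while dropping far from every target would simply return null and leave the object where it is.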
Let me give you some context on why that matters: “giants like Facebook, Google and Microsoft as well as startups like Sketchfab are racing to provide useful services that turn the Web 3D.” (*)
Facebook has just announced that it will embrace “a new type of content for you to upload alongside text, photos and videos. 3D objects and scenes saved in the industry standard glTF 2.0 format can be dragged straight to a browser window to add to your Facebook account.” (*)
Some of that is not necessarily new. Stereoscopy, a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision, has been around for ages.
But 3D models in the digital age can do way more than this nice little trick that gets people excited for a couple of seconds on Facebook. The already mentioned industry standard glTF makes this possible. We introduced it last week, and have seen it popping up here and there.
“We offer glTF as the standard export for all downloadable models in addition to the original file format that was uploaded. That’s over 100,000 models currently available for free download under Creative Commons licensing.”
And that is just Sketchfab. We want to make these objects easily usable in a scene and add components to them. You can also add more components yourself if you like or need to. It will be open source.
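Adding your own component in A-Frame means registering it on the global `AFRAME` object. The sketch below shows the general shape; the `spin` component, its schema, and the tiny stub that stands in for the `AFRAME` global outside a browser are all our illustrative assumptions, not part of xrdok.

```javascript
// Sketch: registering your own A-Frame component. The stub below
// stands in for the AFRAME global when run outside a browser; in a
// real page AFRAME comes from the a-frame <script> tag.
const AFRAME = (typeof globalThis !== 'undefined' && globalThis.AFRAME) ||
  { components: {}, registerComponent(name, def) { this.components[name] = def; } };

// A hypothetical 'spin' component: rotates its entity every frame.
AFRAME.registerComponent('spin', {
  schema: { speed: { type: 'number', default: 90 } }, // degrees per second
  tick(time, timeDelta) {
    const rot = this.el.getAttribute('rotation');
    rot.y += this.data.speed * (timeDelta / 1000);
    this.el.setAttribute('rotation', rot);
  },
});

// Simulate one 500 ms frame on a fake entity to show the effect.
const fakeEl = {
  rotation: { x: 0, y: 0, z: 0 },
  getAttribute() { return this.rotation; },
  setAttribute(name, value) { this.rotation = value; },
};
const spin = Object.create(AFRAME.components.spin);
spin.el = fakeEl;
spin.data = { speed: 90 };
spin.tick(500, 500);
console.log(fakeEl.rotation.y); // → 45
```

In markup, such a component would then be attached declaratively, e.g. `<a-entity spin="speed: 45">`.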
Turn the Web 3D
3D objects can and will change and enhance the ways in which we tell stories on the good old web. Especially when you take these objects and put them somewhere else (like onto the real reality).
This can very easily be done in your mobile browser right now. For example on an iPhone.
But there is more. The iPhone X, for example, has introduced 3D head tracking (face tracking) using its TrueDepth camera. This allows the 3D position of the user’s eyes to be inferred. How cool is that?
And again: up until now we are only talking about mobile usage on a phone. Just imagine what will be possible when you are wearing mixed reality glasses. You will be able to add 3D objects to the real reality, and you might not even be able to distinguish them from the real world. Check out Mica.
First things first
Okay, back to xrdok. We are aiming for the easy stuff in the beginning. On a technical level, Ronny will start by adding external 3D models to our test scene and fiddle around a bit with AR. Of course, we want to tackle VR first.
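For a sense of how little markup it takes to pull an external glTF model into a test scene, here is a minimal A-Frame page. The model path is a placeholder and the pinned A-Frame version is just an example, not what xrdok ships.

```html
<!-- Minimal A-Frame scene loading an external glTF model.
     The model src is a placeholder path. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <a-asset-item id="model" src="path/to/model.gltf"></a-asset-item>
      </a-assets>
      <!-- The external model, placed in front of the camera -->
      <a-entity gltf-model="#model" position="0 1 -3"></a-entity>
      <!-- A gaze cursor, so objects can be selected by looking at them -->
      <a-entity camera look-controls>
        <a-entity cursor="fuse: true" position="0 0 -1"
                  geometry="primitive: ring; radiusInner: 0.01; radiusOuter: 0.02"
                  material="color: white; shader: flat"></a-entity>
      </a-entity>
    </a-scene>
  </body>
</html>
```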
We have already started thinking about proper test scenes that represent scenarios useful for content producers. We are totally open for discussion and will run several hackathons to let you play around with the tool once we get there.