Week 06

Hi! We have been busy doing research, playing around with tools, and trying to keep track of everything that is currently happening in our field of interest.

Looks pretty straightforward

One day after our last blog post, Mozilla released the beta of Spoke:

Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR.

The social aspect is obviously key here. I tried to meet my colleague Guy, who was sitting next to me in a workshop, in order to check out the trippy tunnel. We failed miserably. Maybe it was us, or maybe the user experience is not quite there yet. Visiting the Palace of Versailles was not that stunning either. Nevertheless, it is great how easy it is to add additional 3D content. We are curious to see how things will evolve.

The torch I hold is always a flame

I had way more fun playing around with the stunning AR app Torch – a prototyping, content creation, and collaboration platform for the AR and VR cloud. Obviously a good way to start prototyping too.

Use Cases

And the AR wayfinding demo is really cool! What else? Besides playing around with tools, we spent more time thinking about use cases. Obviously we are not the only ones, and this is where a talk by Michael Hoffmann (Object Theory) comes in handy.


There might be more, but this is a well-structured approach to finding out which scenarios might be best suited for our project.

Project Aero

And there was something else. Adobe announced the private beta of Project Aero, an augmented reality authoring tool and content delivery platform for smartphones. Project Aero will allow creatives to make augmented reality artifacts without the need to learn how to use things like the Unity game engine.

So what does that mean? Adobe Creative Cloud applications like Photoshop and Dimension will be able to turn things like images, videos, or PSD files into AR artifacts that become part of the physical world around you, so you can walk through the layers of a PSD file, for example. Project Aero uses the USDZ file format for Apple’s ARKit.

USDZ or glTF

USDZ has Apple, Pixar, and Adobe behind it, but glTF has Microsoft, Google, and Facebook. And, contrary to Apple’s usual MO, USDZ is actually an open format, just like glTF is. Time will tell which format will become the industry standard, but our money is on glTF.

War or Peace

Linda and Stephan do prefer peace

One more thing: We showed our Macke VR prototype in collaboration with @DeutscheWelle at the War or Peace Conference. Our @MIZBabelsberg co-funded project @xrdok will enable journalists to build such prototypes themselves. #warorpeace

Week 03

Playing around with Poly

Welcome back. This time we met via Skype to discuss next steps. Ronny has been busy implementing our component system.

So what does that mean? You can now look at an object in an A-Frame test scene, grab it with a gaze or a click on your computer, and drop it on another object. Just some 40 lines of code, but our very first two defined components. More to come. The goal is to enable content producers to enhance objects in scenes for explaining stuff or telling stories.
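To make that tangible, here is a minimal sketch of what such an interaction component could look like in A-Frame. This is our illustration rather than the actual xrdok code: the component name grabbable and the scene contents are made up for the example.

```html
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
<script>
  // Sketch: clicking (or gazing at) a grabbable entity toggles it between
  // following the camera and being dropped where it currently is.
  AFRAME.registerComponent('grabbable', {
    init: function () {
      this.grabbed = false;
      // The <a-cursor> emits 'click' on the entity it intersects.
      this.el.addEventListener('click', () => {
        const cameraEl = this.el.sceneEl.camera.el;
        if (!this.grabbed) {
          // Grab: re-parent under the camera, keeping the world transform.
          cameraEl.object3D.attach(this.el.object3D);
        } else {
          // Drop: re-parent back to the scene at the current position.
          this.el.sceneEl.object3D.attach(this.el.object3D);
        }
        this.grabbed = !this.grabbed;
      });
    }
  });
</script>

<a-scene>
  <a-camera><a-cursor></a-cursor></a-camera>
  <a-box grabbable position="0 1.5 -2" color="tomato"></a-box>
  <a-cylinder position="1 0.5 -2" color="steelblue"></a-cylinder>
</a-scene>
```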

Spatial Storytelling

Let me give you some context on why that matters: “giants like Facebook, Google and Microsoft as well as startups like Sketchfab are racing to provide useful services that turn the Web 3D.” (*)

Facebook has just announced that it will embrace “a new type of content for you to upload alongside text, photos and videos. 3D objects and scenes saved in the industry standard glTF 2.0 format can be dragged straight to a browser window to add to your Facebook account” (*)

Some of that is not necessarily new. Stereoscopy, a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision, has been around for ages.

But 3D models in the digital age can do way more than this nice little trick to get people excited for a couple of seconds on Facebook. The already mentioned industry standard glTF makes this possible. We introduced it last week, and we have seen it popping up here and there:

We offer glTF as the standard export for all downloadable models in addition to the original file format that was uploaded. That’s over 100,000 models currently available for free download under Creative Commons licensing.

And that is just Sketchfab. We want to make these objects easily usable in a scene and add components. Actually, you can add more components if you like or need to. It will be open source.
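To show how little markup that takes, here is a minimal A-Frame sketch using the built-in gltf-model component; the file name is a placeholder for any glTF download, from Sketchfab or elsewhere.

```html
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
<a-scene>
  <a-assets>
    <!-- Placeholder path: any downloaded glTF model. -->
    <a-asset-item id="statue" src="models/statue.gltf"></a-asset-item>
  </a-assets>
  <!-- Drop the model into the scene; additional components go on this entity. -->
  <a-entity gltf-model="#statue" position="0 0 -3"></a-entity>
</a-scene>
```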

Turn the Web 3D

People did not get the idea of scrolling at first. This image explains that text is not suddenly lost just because it is no longer visible when you scroll.

3D objects can and will change and enhance the ways in which we tell stories on the good old web. Especially when you take these objects and put them somewhere (like onto the real reality).

This can very easily be done in your mobile browser right now, for example on an iPhone:

Try https://shop.magnolia.com/collections/arkit
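Pages like that rely on AR Quick Look in iOS 12 Safari: a link marked with rel="ar" that wraps a preview image and points at a USDZ file. A minimal sketch with placeholder file names:

```html
<!-- Safari on iOS 12 sees rel="ar" and opens the USDZ model in AR Quick Look. -->
<a rel="ar" href="models/armchair.usdz">
  <img src="images/armchair-preview.jpg" alt="View this armchair in your room">
</a>
```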

But there is more. The iPhone X, for example, has introduced 3D head tracking (face tracking) using its TrueDepth camera. This allows for inferring the 3D position of the user's eyes. How cool is that?

Try https://itunes.apple.com/de/app/theparallaxview/id1352818700?mt=8

And again: up until now we have only been talking about mobile usage on a phone. Just imagine what will be possible when you are wearing mixed reality glasses. You will be able to add 3D objects to the real reality. You might not even be able to distinguish them from the real world. Check out Mica.

First things first

Okay, back to xrdok. We are aiming for easy stuff in the beginning. On a technical level, Ronny will start by adding external 3D models to our test scene and fiddle around a bit with AR. But surely we want to tackle VR first.

We have already started thinking about proper test scenes that represent scenarios useful for content producers. We are totally up for discussion and will run several hackathons to let you play around with the tool once we get there.

Week 01

Ronny playing around with A-Frame

We just had our internal kickoff at MIZ Babelsberg: discussing milestones, talking about the MVP (minimum viable product) and events to come in the weeks and months ahead (including hackathons), and – of course – at the center of it all, we started shaping a shared product vision.

We agreed on building a very first scene next week, including two objects and one action, in order to derive a proper working plan.
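Just to give a sense of the scope, such a scene can be tiny in A-Frame. This sketch is our illustration of the plan, not the actual scene: two objects and one action, where clicking (or gaze-fusing) the box changes its color.

```html
<script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
<a-scene>
  <a-camera><a-cursor></a-cursor></a-camera>
  <a-box id="box" position="-1 1 -3" color="orange"></a-box>
  <a-sphere position="1 1.25 -3" radius="0.75" color="teal"></a-sphere>
</a-scene>
<script>
  // The one action: a click on the box (via cursor or mouse) recolors it.
  document.querySelector('#box').addEventListener('click', (evt) => {
    evt.target.setAttribute('color', 'purple');
  });
</script>
```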

What the heck are we actually aiming for

If you are asking yourself what this is all about, let me try to clarify. We are deeply convinced that we are slowly transitioning to a new computing paradigm that will not necessarily be dominated by smartphones.

2018 marks the beginning of the end of traditional smartphones. During the next decade, we will start to transition to the next era of computing and connected devices, which we will wear and will command using our voices, gesture and touch. – Amy Webb

VR, AR – currently we prefer the term XR – will shape the way in which we consume content, information and stories. On mobile phones (for a start), on head-mounted displays (that will look better and better and become lighter and lighter) and on contact lenses or elsewhere (maybe someday).

All this will, at the same time, feel as if it takes forever and happen very, very fast. Not fast enough for some (*), but it is actually happening (*) right now: new forms and technical possibilities to tell stories and to transfer knowledge.

XR Dok – What can I do

Our open source toolkit will allow content producers to combine CGI objects with actions in spaces. For example? Imagine a 3D model of a car that can be annotated with text and information. When you are writing an article on a specific car, just add a 3D model that users can look at from various perspectives. This works in VR mode, but it could certainly work in AR mode as well. Just park the car on your table.
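In markup, an annotated model might look roughly like this. Note that gltf-model ships with A-Frame, but the annotation component, its properties and the car specs are hypothetical stand-ins for what our toolkit could provide:

```html
<!-- "annotation" is a hypothetical xrdok-style component, not part of A-Frame. -->
<a-entity gltf-model="url(models/car.gltf)"
          annotation="title: Engine; text: 310 hp twin-turbo V6; anchor: 0 0.9 1.2">
</a-entity>
```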

XR Dok will, amongst other things, make use of OpenStreetMap. And it will allow integration. You as a content producer will have the chance to build a story in CGI very easily with our toolkit.

One of the major challenges in our project is to combine technical knowledge with content creation ideas for emerging media and to explain all this to journalists, content producers, friends and family.

WTF GLTF 

In order to learn something new together, we will start throwing in things that some of us learned along the way. Today: a new file format called glTF.

glTF (GL Transmission Format) is a file format for 3D scenes and models using the JSON standard. It is described by its creators as the “JPEG of 3D.”
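To give you a feel for it, here is a skeletal glTF 2.0 file describing a single triangle. The JSON carries the scene graph and metadata, while the heavy geometry bytes live in a separate binary buffer (triangle.bin is a placeholder name):

```json
{
  "asset": { "version": "2.0" },
  "scene": 0,
  "scenes": [ { "nodes": [ 0 ] } ],
  "nodes": [ { "mesh": 0 } ],
  "meshes": [
    { "primitives": [ { "attributes": { "POSITION": 0 } } ] }
  ],
  "accessors": [
    { "bufferView": 0, "componentType": 5126, "count": 3,
      "type": "VEC3", "min": [ 0, 0, 0 ], "max": [ 1, 1, 0 ] }
  ],
  "bufferViews": [
    { "buffer": 0, "byteOffset": 0, "byteLength": 36 }
  ],
  "buffers": [
    { "uri": "triangle.bin", "byteLength": 36 }
  ]
}
```

Three positions of three floats each is exactly the 36 bytes the buffer declares; everything else is plain, readable JSON.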