XR Hackathon II 25th February 2019

Hello again. We are running another Hackathon. Same place, different date. We especially want to encourage Berlin-based journalists to join the ride: February 25th, 2019, 10 am, Medieninnovationszentrum Babelsberg, Stahnsdorfer Str. 107, 14482 Potsdam. Bring your laptop. We provide food and drinks. And do not forget to write a mail to Marcus in order to join. The event is free as in coffee.

And here is what we want to achieve: Building on a fruitful brainstorming session and the very first implementations last time, we identified three major areas of interest plus one little extra.

Beam me up, Scotty

We want to delve further into the chances and possibilities of using XR in the context of learning and knowledge transfer. And we want to figure out where VR/AR/XR makes more sense than traditional forms of storytelling. VR can very quickly take you to places you normally could not go: let's say the middle of the human body, the moon or a pyramid.

Our web-based framework lets you beam a user from one space to another. Or, to be more precise: from one sphere to another, for instance by touching an object. So we could borrow a 3D model from Sketchfab or Poly (CC licence), say an ancient object, and put it in an empty space where you or your user could examine it by literally moving around it in the experience. By touching it, the sphere around you would change and, for instance, take you to ancient Greece. A personal, interactive 3D CGI experience.
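To give you a flavour, here is a minimal sketch in plain A-Frame of what such a "beam" could look like. The component name `beam-to` and the asset file names are made up for illustration; this is not the actual XRdok API.

```html
<html>
  <head>
    <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
    <script>
      // Hypothetical component: clicking the entity swaps the surrounding
      // 360 sphere for another image, "beaming" the user somewhere else.
      AFRAME.registerComponent('beam-to', {
        schema: {target: {type: 'string'}},
        init: function () {
          var data = this.data;
          this.el.addEventListener('click', function () {
            document.querySelector('a-sky').setAttribute('src', data.target);
          });
        }
      });
    </script>
  </head>
  <body>
    <a-scene cursor="rayOrigin: mouse">
      <a-sky src="empty-room.jpg"></a-sky>
      <!-- a CC-licensed model, e.g. borrowed from Sketchfab or Poly -->
      <a-entity gltf-model="url(amphora.gltf)" position="0 1.2 -2"
                beam-to="target: ancient-greece-360.jpg"></a-entity>
    </a-scene>
  </body>
</html>
```

A handful of lines like these already cover the core interaction: examine an object, touch it, land in a new sphere.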

Time Travel

Another thing we would love to explore is putting layers of visual information on top of one another. Think of several 360 spheres taken from exactly the same spot but in different years. With a simple slider you could "move in time". Something similar is possible with Google Earth, but we think it is not yet fully thought through. We have seen some crazy Mixed Reality demos on Twitter from a guy augmenting his room with an older image of that very room, but we want to dig deeper here.
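The basic mechanic is simple enough to sketch in plain A-Frame, assuming you have one 360 photo per year from the same spot (the file names here are hypothetical):

```html
<html>
  <head>
    <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-sky id="sky" src="spot-2019.jpg"></a-sky>
    </a-scene>
    <!-- a plain DOM slider overlaid on the scene swaps the sky texture -->
    <input type="range" min="2015" max="2019" value="2019"
           style="position: fixed; bottom: 20px; left: 20px; z-index: 1"
           oninput="document.querySelector('#sky').setAttribute('src', 'spot-' + this.value + '.jpg')">
  </body>
</html>
```

The hard part is not the slider but the material: finding or shooting 360 photos of the same spot across years.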

Walking Knowledge – Spatial Storytelling

A very simple approach for non-coders could be to add textual information in a 3D space. Just think of a web-based data visualisation where you learn everything about a 100 metre sprint (Olympic or otherwise) by simply moving along a digital sports field. "Walking is the new scrolling," as @Grahaphics said about AR. With XR Dok you can very easily build a VR version of that.
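A scene like that is little more than text entities placed along a track. Here is a rough sketch in plain A-Frame (the labels are illustrative, not real data journalism):

```html
<a-scene>
  <!-- a green ground plane standing in for the 100 m track -->
  <a-plane rotation="-90 0 0" width="10" height="110" color="#3a7d3a"></a-plane>
  <a-text value="Start: the reaction to the gun" position="-2 1.6 0"
          side="double"></a-text>
  <a-text value="30 m: the acceleration phase" position="-2 1.6 -30"
          side="double"></a-text>
  <a-text value="Finish: world record 9.58 s (Usain Bolt, 2009)"
          position="-2 1.6 -100" side="double"></a-text>
</a-scene>
```

The reader walks (or WASDs) down the field and the story unfolds station by station.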

BER – Do I need to say more?

Berlin is going to have a new airport. So they say. It has been a while, and it will certainly take more time. Before the building ever opens, we will bring along a digital 3D version of it to play around with in VR. If the model is done by February 25th, that is.

Of course we are more than happy to discuss and/or prototype your idea. And please do not worry if code is not your friend. Our tool is aimed at journalists who know that HTML is an abbreviation but struggle to make use of the rest in their daily lives. Welcome to the club!


When? February 25th, 2019, 10 am. Where? Medieninnovationszentrum Babelsberg, Stahnsdorfer Str. 107, 14482 Potsdam. Bring your laptop. We provide food and drinks. How? Do not forget to write a mail to Marcus in order to join. The event is free as in coffee.

XRDOK is an open source XR toolkit for CGI-based content production. It lets users combine 3d objects (gltf) with actions (components) in scenes (VR, later AR) easily in order to tell stories or explain things.

Vragments is a Berlin-based AR/VR studio. We are programmers, journalists and designers who focus on innovative technologies. We help you get your projects out.

Hacking Time Travel

A test scene with some decent pink floor and cars misbehaving

We are huge fans of the concept of a Hackathon. You just need a bunch of enthusiastic people, a room and some food. And then you suddenly end up building strange stuff like a room in the middle of nowhere, equipped with office furniture. Why? In order to put objects on that table that let you travel through time and space when you touch them.

One guy working and the others watching in awe

That was one of the ideas we worked on at our Hackathon on January 11th at MIZ in Babelsberg. We were a bit nervous beforehand about how the participants would embrace and make use of our framework. How glad we were when we heard someone say: "They are definitely on the right track here."

Easy Use

All you need is a basic understanding of HTML to start playing around with our framework, which lets users easily combine 3D objects (glTF) with actions (components) in scenes (VR, later AR) in order to tell stories or explain things. XRdok allows the creation of interactive VR projects built on A-Frame. Our goal: to provide a library of components that cover specific use cases and are primed for easy use rather than abstraction.

Who? What? Why? Questions you should be able to answer

But a framework is one thing. The next question is: what to build with it? After a general introduction we started discussing ideas and agreed on answering three questions: Who are you in a VR experience? What do you do? And why? With these questions answered you are good to go, because you have the foundation for a learning experience.

The idea of touching or combining objects in order to move through space had particular charm. Imagine virtually touching an artefact that takes you back into a 360 scenario immersing you in its historical context. All you need to start is a 360 photo used to create a scene and a bunch of objects, which can be found on Sketchfab or Poly.

Personal interactive experience

The other scenario we started working on was all about re-creating a scene that can be observed from different angles, because that is a huge advantage of CGI-based 3D scenes. There is a "long tradition" of doing this in VR, with Nonny de la Peña and her team creating the immersive docu-game "Gone Gitmo" in 2007. Just imagine being able to do something like that on a smaller scale, all by yourself, in a couple of hours or days.

Crime scenes are recreated like that by the Institute of Forensic Medicine in Zurich, and there is another journalistic example that lets users walk around the scene in Ferguson, Missouri, where Michael Brown was shot dead by a police officer on 9 August 2014. We are curious to explore more in this context of interactive knowledge transfer.

Next Hackathon

We would dearly like to thank all participants and the MIZ for a great day. And we are already planning our next Hackathon. Just drop us a line if you are interested.

XR DOK Hackathon

It is happening. We are running our very first Hackathon to play around with our framework. Come around and join us. Be among the first to test XR Dok. Whether you are a journalist, programmer, graphic designer or just generally interested in the future of interactive immersive media: we want to talk, discuss, code and prototype with you.

January 11th, 2019, 10 am. Medieninnovationszentrum Babelsberg, Stahnsdorfer Str. 107, 14482 Potsdam. Bring your laptop. We provide food and drinks. And do not forget to write a mail to Marcus in order to join. The event is free as in coffee.


Okay, again. Here is the important stuff: January 11th, 2019, 10 am. Stahnsdorfer Str. 107, 14482 Potsdam. Write a mail if you want to come around.

And here is a test-scene on glitch. Enjoy!

Week 06

Hi! We have been busy doing research, playing around with tools and trying to keep track of everything that is currently happening in our field of interest.

Looks pretty straightforward

One day after our last blogpost, Mozilla released the beta of Spoke:

Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR.

The social aspect is obviously key here. I tried to meet my colleague Guy, who was sitting next to me in a workshop, in order to check out the trippy tunnel. We failed miserably. Maybe it was us, or maybe the user experience is not quite there yet. Visiting the Palace of Versailles was not that stunning either. Nevertheless, it is a great idea, and it is remarkable how easy it is to add additional 3D content. Curious to see how things will evolve.

The torch I hold is always a flame

I had way more fun playing around with the stunning AR app Torch – a prototyping, content creation and collaboration platform for the AR and VR cloud. Obviously a good way to start prototyping too.

Use Cases

And the AR wayfinding demo is really cool! What else? Besides playing around with tools, we spent more time thinking about use cases. Obviously we are not the only ones. And this is where a talk by Michael Hoffmann (Object Theory) comes in handy.


There might be more, but nevertheless this is a well-structured approach to finding out which scenarios are best suited for our project.

Project Aero

And there was something else. Adobe announced the private beta of Project Aero, an augmented reality authoring tool and content delivery platform for smartphones. Project Aero will allow creatives to make augmented reality artifacts without the need to learn how to use things like the Unity game engine.

So what does that mean? Adobe Creative Cloud applications like Photoshop and Dimensions will be able to make AR artifacts from things like image, video, or PSD files become part of the physical world around you, so you can walk through the layers of a PSD file, for example. Project Aero uses the USDZ file format for Apple’s ARKit.

USDZ or glTF

USDZ has Apple, Pixar, and Adobe behind it, but glTF has Microsoft, Google, and Facebook. And, contrary to Apple's usual MO, USDZ is actually an open format, just like glTF. Time will tell which format becomes the industry standard, but our money is on glTF.

War or Peace

Linda and Stephan do prefer peace

One more thing: We showed our Macke VR prototype in collaboration with @DeutscheWelle at the War or Peace Conference. Our @MIZBabelsberg co-funded project @xrdok will enable journalists to build such prototypes themselves. #warorpeace

Week 03

Playing around with Poly

Welcome back. This time we met via Skype to discuss next steps. Ronny has been busy starting to implement our component system.

So what does that mean? You can now look at an object in an A-Frame test scene, grab it with a gaze or a click on your computer, and drop it on another object. Just some 40 lines of code, but our very first two defined components. More to come. The goal is to enable content producers to enhance objects in scenes for explaining stuff or telling stories.
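For readers who have never seen an A-Frame component: here is a rough sketch of what a grab interaction can look like. The component name and the details are hypothetical; this is not Ronny's actual code, just the general shape of the technique.

```html
<script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
<script>
  // Hypothetical "grabbable" component: clicking (or gaze-fusing) an object
  // re-parents it to the camera, so it floats in front of the user.
  AFRAME.registerComponent('grabbable', {
    init: function () {
      var el = this.el;
      el.addEventListener('click', function () {
        var camera = document.querySelector('[camera]');
        camera.appendChild(el);                 // re-parent to the camera
        el.setAttribute('position', '0 0 -1');  // hold it 1 m in front
      });
    }
  });
</script>
<a-scene cursor="rayOrigin: mouse">
  <a-box grabbable position="0 1 -3" color="tomato"></a-box>
</a-scene>
```

A matching "droppable" component would listen for a second click and re-parent the object onto the target entity instead.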

Spatial Storytelling

Let me give you contextual info on why that matters. Actually “giants like Facebook, Google and Microsoft as well as startups like Sketchfab are racing to provide useful services that turn the Web 3D.” (*)

Facebook has just announced it will embrace "a new type of content for you to upload alongside text, photos and videos. 3D objects and scenes saved in the industry standard glTF 2.0 format can be dragged straight to a browser window to add to your Facebook account" (*)

Some of that is not necessarily new. Stereoscopy, a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision, has been around for ages.

But 3D models in the digital age can do way more than this nice little trick to get people excited for a couple of seconds on Facebook. The already mentioned industry standard glTF does the trick. We introduced it last week and have seen it popping up here and there:

We offer glTF as the standard export for all downloadable models in addition to the original file format that was uploaded. That’s over 100,000 models currently available for free download under Creative Commons licensing.

And that is just Sketchfab. We want to make these objects easily usable in a scene and add components. You can add more components if you like or need – it will be open source.

Turn the Web 3D

People did not get the idea of scrolling at first. This image explains that text is not suddenly lost just because it is no longer visible when you scroll.

3D objects can and will change and enhance the ways in which we tell stories on the good old web. Especially when you take these objects and put them somewhere (like onto the real reality).

This can very easily be done in your mobile browser right now. For example, on an iPhone:

Try https://shop.magnolia.com/collections/arkit

But there is more. The iPhone X, for example, has introduced 3D head tracking (face tracking) using its TrueDepth camera. This allows the phone to infer the 3D position of the user's eyes. How cool is that?

Try https://itunes.apple.com/de/app/theparallaxview/id1352818700?mt=8

And again: up until now we are just talking about mobile usage on a phone. Just imagine what will be possible when you are wearing mixed reality glasses. You will be able to add 3D objects to the real reality. You might not even be able to distinguish them from the real world. Check out Mica.

First things first

Okay, back to xrdok. We are aiming for easy stuff in the beginning. On a technical level, Ronny will start by adding external 3D models to our test scene and will fiddle around a bit with AR, though we certainly want to tackle VR first.

We have already started thinking about proper test scenes that represent scenarios useful for content producers. We are totally up for discussion and will run several hackathons to let you play around with the tool once we are there.

Week 01

Ronny playing around with A Frame

We just had our internal kickoff at MIZ Babelsberg: discussing milestones, talking about the MVP (minimum viable product) and events to come in the weeks and months ahead (including hackathons) and – of course – at the center of it all: we started shaping a shared product vision.

Working at the pyramid

We agreed on building a very first scene in the next week including two objects and one action in order to derive a proper working plan.

What the heck are we actually aiming for

If you are asking yourself what this is all about, let me try to clarify. We are deeply convinced that we are slowly transitioning to a new computing paradigm that will not necessarily be dominated by smartphones.

"2018 marks the beginning of the end of traditional smartphones. During the next decade, we will start to transition to the next era of computing and connected devices, which we will wear and will command using our voices, gesture and touch." – Amy Webb

VR, AR – currently we prefer the term XR – will shape the way in which we will consume content, information and stories. On mobile phones (for a start), on head mounted displays (that will look better and better and become lighter and lighter) and on contact lenses or elsewhere (maybe someday).

All this will, at the same time, feel as if it takes forever and happen very, very fast. Not fast enough for some (*), but it is actually happening (*) right now: new forms and technical possibilities to tell stories and to transfer knowledge.

XR Dok – What can I do?

Our open source toolkit will allow content producers to combine CGI-objects with actions in spaces. For example? Imagine a 3D-model of a car that can be annotated with text and information. When you are writing an article on a specific car, just add a 3D-model, that users can look at from various perspectives. This works in VR-mode, but it could certainly work in AR-mode as well. Just park the car on your table.
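In plain A-Frame, the VR version of that annotated car could look roughly like this. The model file and the annotation text are placeholders; the eventual XRdok components will wrap this up more conveniently.

```html
<a-scene>
  <!-- a glTF car model the reader can walk around (hypothetical asset) -->
  <a-entity gltf-model="url(car.gltf)" position="0 0 -4"></a-entity>
  <!-- a text annotation floating next to the model -->
  <a-text value="Annotation: engine, top speed, price..."
          position="1.5 1.5 -4" side="double"></a-text>
  <!-- default camera: look around with the mouse, move with WASD -->
  <a-camera></a-camera>
</a-scene>
```

In AR mode, the same model and annotation would simply be anchored to your table instead of a virtual floor.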

XR Dok will, amongst other things, make use of OpenStreetMap. And it will allow integrations. As a content producer, you will have the chance to build a story in CGI very easily with our toolkit.

One of the major challenges in our project is to combine technical knowledge with content creation ideas for emerging media and to explain all this to journalists, content producers, friends and family.


In order to learn something new together, we will start throwing in things that some of us pick up along the way. Today: a new file format called glTF.

glTF (GL Transmission Format) is a file format for 3D scenes and models using the JSON standard. It is described by its creators as the “JPEG of 3D.”


City Tarif, Haiyti

Welcome. How are you? Welcome to XRDOK, our new project. We are Vragments. We are building the web-based VR editor Fader, and we are eager to add something new.

This time we want to build upon given frameworks like A-Frame in order to come up with something even more useful: An open source XR toolkit for CGI-based storytelling. Supported by MIZ Babelsberg. Thx!

We want to offer an easy solution for storytellers, journalists and others to play around with objects, rooms and interactions in VR, AR and whatever comes along.

Next on the list: our internal kick-off event. We'll be back next week.