The fabulous XRDOK starter kit

Before social media and before easy content management systems like WordPress (which actually runs the website you are reading right now), you had to open an empty editor and use the Hypertext Markup Language (HTML) to create a web page that could, for instance, display some text. Together with Cascading Style Sheets (CSS) and JavaScript, HTML forms the triad of cornerstone technologies of the World Wide Web.

The web will no longer be flat

Until very recently this web was flat: text, images and videos to look at on a screen. This is currently changing. A group of people is working on application programming interfaces (APIs) that allow web applications to present content in virtual or mixed reality. That means you can basically enter a 3D scene in your web browser, look around and move around with the cursor just like in a computer game on a laptop or desktop PC; you can use a head-mounted display too; you can place digital objects into the real reality; and much more.

Learn Your Vocabulary

All this relies on software, and as soon as you start digging into the topic you are confronted with a lot of vocabulary, like WebGL or WebVR or A-Frame or Entity Component System, that can very easily frighten you off if you are not a techie. But you should not give up so fast. It was easy to write a simple “Hello World!” website back in the day, and it is not that complicated to build your very own 3D experience with computer-generated imagery (CGI) and animated 3D objects. Trust us. We will focus on VR experiences for now.
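For reference, that classic “Hello World!” page really is this short – a single plain HTML file, no tooling required:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <!-- Open this file in any browser and it displays the text below -->
    <p>Hello World!</p>
  </body>
</html>
```

Save it as hello.html, double-click it, and your browser renders it. The 3D version we are heading towards is not much longer.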

Say ‘Hi’ to WebVR

WebVR is an open specification that makes it possible to experience VR in your browser. The goal is to make it easier for everyone to get into VR experiences, no matter what device you have. To get a first idea, check out An Intro to WebVR, a free, five-part video course with interactive code examples that will teach you the fundamentals of WebVR using A-Frame. Two things you need to know right now: A-Frame and Glitch.

A-Frame is an open-source web framework for building virtual reality (VR) experiences. To be precise, A-Frame is an entity-component-system framework for Three.js that lets developers create 3D and WebVR scenes using HTML. The last bit is crucial. A-Frame scenes can be developed in a plain HTML file without having to install anything – just like your very first website that wrote “Hello World” back in the 1990s. Check out the Introduction to A-Frame here.
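To see how little is needed, here is the canonical A-Frame starter scene: one plain HTML file that pulls the framework in from a CDN (the version number is just an example) and describes the whole 3D scene with custom HTML tags:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame is loaded from a CDN; no build step or installation needed -->
    <script src="https://aframe.io/releases/0.9.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- Everything inside a-scene is rendered in 3D and viewable in VR -->
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D2"></a-box>
      <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

Open it in a browser and you can already look around with the mouse or a headset. Tags like a-box and a-sphere are entities with components (position, rotation, color) attached – that is the entity-component-system idea in action.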

Glitch is an online code editor that instantly hosts and deploys for free. It looks a bit weird in the beginning, but is really useful and the place where you can try out things by remixing existing projects. And: “If you’re not a developer, don’t sweat it — you can make simple changes as easily as you edit a spreadsheet. Apps update live as you type.”

Our Glitch Tutorial

We prepared some examples for you to learn from step by step. Here is our tutorial. Let me quickly show you what to do when you open 01-objects. There is a lot going on, but please do not panic. It all makes sense:

1) Open the index.html file and you will see some code. Yes, code again – but be reminded that “you can make simple changes as easily as you edit a spreadsheet”.

2) In order to play around with the project, click “Remix to Edit” – now you have your very own version of the file and can do whatever you want with it.

3) You can change the view to Code (just the code), App (just the app) or both (as you can see here).

4) Please pay attention to the grey text because these are the comments. They explain stuff.

Once again, it is useful to watch the starter kit videos mentioned above. They help you sort things out.

Let me introduce the Visual Inspector

After you have played around with the code a little, it is time to check out the visual inspector. A-Frame provides a handy built-in visual 3D inspector. Open up any A-Frame scene, hit <ctrl> + <alt> + i, and fly around to peek under the hood! The A-Frame inspector is a visual tool for inspecting and tweaking scenes. With the inspector, we can:

  • Drag, rotate, and scale entities using handles and helpers
  • Tweak an entity’s components and their properties using widgets
  • Immediately see results from changing values without having to go back and forth between code and the browser

The Time has come for XRDOK components

A-Frame is an open-source web framework. That means the source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. A-Frame has had over 270 different contributors who add components. A component is a reusable and modular chunk of data that adds further functionality.
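To give you an idea of what such a component looks like under the hood, here is a minimal sketch that slowly spins whatever entity it is attached to. The component name “spin” and its “speed” property are made up for illustration; the registration API itself (AFRAME.registerComponent with schema and tick) is standard A-Frame:

```html
<script>
  // Illustrative example component: rotates its entity every frame.
  // 'spin' and 'speed' are invented names, not part of A-Frame itself.
  AFRAME.registerComponent('spin', {
    schema: {
      speed: { type: 'number', default: 45 } // degrees per second
    },
    // tick is called on every render frame with the elapsed time in ms
    tick: function (time, timeDelta) {
      var rotation = this.el.getAttribute('rotation');
      rotation.y += this.data.speed * (timeDelta / 1000);
      this.el.setAttribute('rotation', rotation);
    }
  });
</script>

<!-- Once registered, it is used in the scene like any built-in component: -->
<a-box spin="speed: 90" position="0 1 -3" color="tomato"></a-box>
```

That is the whole pattern: a small chunk of JavaScript registered once, then attached declaratively to any entity via an HTML attribute.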

And here Vragments comes in with XRDOK. XRDOK is an open source XR toolkit for CGI-based content production. It lets users easily combine 3D objects (glTF) with actions (components) in scenes (VR, later AR) in order to tell stories or explain things.

A library of components for specific use cases

Our goal is to provide a library of components that deliver functionality for specific use cases and are primed for easy use rather than abstraction. You can start playing around with our components xr-on and xr-click in tutorial project number 6.

xr-click: When added to an A-Frame entity (a tag like a-box or a-sphere) or to your own A-Frame components, it triggers a click event when clicked.

xr-on: Triggers component updates sequentially and in parallel. Enrich the behaviour of an entity by defining components in child nodes that are added to the entity when a specified event condition is met.
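Based on that description, a usage sketch might look like the following. Please treat the exact attribute syntax as an assumption on my part – tutorial project number 6 shows the real usage:

```html
<!-- Hypothetical sketch: xr-click makes the box emit a click event,
     and the xr-on child node declares which component should be added
     to the parent entity once that event fires. The exact syntax of
     these XRDOK components may differ from what is shown here. -->
<a-box xr-click position="0 1 -3" color="tomato">
  <a-entity xr-on="event: click" material="color: teal"></a-entity>
</a-box>
```

The idea in any case: the behaviour lives in declarative child nodes, so content producers can wire up interactions without writing JavaScript.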

Learn more about our components here.

Hackathon and more

On Monday, February 25th, 2019, at 10 am we will play around with the components at Medieninnovationszentrum Babelsberg, Stahnsdorfer Str. 107, 14482 Potsdam. If you want to, just come over. And bring your laptop. Or write a mail to Marcus if you want to learn more.

Hacking Time Travel

A test scene with some decent pink floor and cars misbehaving

We are huge fans of the concept of a Hackathon. You just need a bunch of enthusiastic people, a room and some food. And then you suddenly end up building strange stuff, like a room in the middle of nowhere equipped with office furniture. Why? In order to put objects on the table that let you travel through time and space when you touch them.

One guy working and the others watching in awe

That was one of the ideas we worked on at our Hackathon on January 11th at MIZ in Babelsberg. We were a bit nervous beforehand about how the participants would embrace and make use of our framework. How glad we were when we heard someone say: they are definitely on the right track here.

Easy Use

All you need is a basic understanding of HTML to start playing around with our framework, which lets users easily combine 3D objects (glTF) with actions (components) in scenes (VR, later AR) in order to tell stories or explain things. XRDOK allows the creation of interactive VR projects built on A-Frame. Our goal: to provide a library of components that deliver functionality for specific use cases and are primed for easy use rather than abstraction.

Who? What? Why? Questions you should be able to answer

But a framework is one thing. The next question is: what to build with it? After a general introduction we started discussing ideas and agreed on answering three questions: Who are you in a VR experience? What do you do? And why? With these questions answered you are good to go, because you have the foundation for a learning experience.

Especially the idea of touching or combining objects in order to move through space had its charm. Imagine virtually touching an artefact that takes you back into a 360° scenario that immerses you in the historical context. All you need to start is a 360° photo to create a scene and a bunch of objects, which can be found on Sketchfab or Poly.

Personal interactive experience

The other scenario we started working on was all about re-creating a scene that can be observed from different angles, because that is a huge advantage of CGI-based 3D scenes. There is a “long tradition” of doing stuff like this in VR, with Nonny de la Peña and her team creating the immersive docu-game “Gone Gitmo” in 2007. Just imagine being able to do something like that on a smaller scale, but all by yourself, in a couple of hours or days.

Crime scenes are recreated like this by the Institute of Forensic Medicine in Zurich, and there is another journalistic example that lets users walk around the Ferguson shooting scene, where Michael Brown was shot dead by a police officer on August 9th, 2014 in Ferguson, Missouri. We are curious to explore more in this context of interactive knowledge transfer.

Next Hackathon

We would dearly like to thank all participants and the MIZ for a great day. And we are already planning our next Hackathon. Just drop us a line if you are interested.

XR DOK Hackathon

It is happening. We are running our very first Hackathon to play around with our framework. Come over and join us. Be among the first to test XR Dok – whether you are a journalist, programmer, graphic designer or just generally interested in the future of interactive immersive media. We want to talk, discuss, code and prototype with you.

January 11th, 2019, 10 am. Medieninnovationszentrum Babelsberg, Stahnsdorfer Str. 107, 14482 Potsdam. Bring your laptop. We provide food and drinks. And do not forget to write a mail to Marcus to join. The event is free as in coffee.

XRDOK is an open source XR toolkit for CGI-based content production. It lets users easily combine 3D objects (glTF) with actions (components) in scenes (VR, later AR) in order to tell stories or explain things.

Okay, again. Here is the important stuff: January 11th, 2019, 10 am. Stahnsdorfer Str. 107, 14482 Potsdam. Write a mail if you want to come around.

And here is a test-scene on glitch. Enjoy!

Week 06

Hi! We have been busy doing some research, playing around with stuff and actually we have been trying to keep track of all the stuff that is currently happening in our field of interest.
Looks pretty straightforward
One day after our last blog post, Mozilla released the beta of Spoke: “Spoke lets you quickly take all the amazing 3D content from across the web from sites like Sketchfab and Google Poly and compose it into a custom scene with your own personal touch. You can also use your own 3D models, exported as glTF. The scenes you create can be published, shared, and used in Hubs in just a few clicks. It takes as little as 5 minutes to create a scene and meet up with others in VR.” The social aspect is obviously key here. I tried to meet my colleague Guy, who was sitting next to me in a workshop, in order to check out the trippy tunnel. We failed miserably. Maybe it was us. Or the user experience is not there yet. Visiting the Palace of Versailles was not that stunning either. Nevertheless, it is great how easy it is to add additional 3D content. Curious to see how things will evolve.
The torch I hold is always a flame
I had way more fun playing around with the stunning AR app Torch – a prototyping, content creation, and collaboration platform for the AR and VR cloud. Obviously a good way to start prototyping, too.

Use Cases

And the AR wayfinding demo is really cool! What else? Besides playing around with tools, we spent more time thinking about use cases. Obviously we are not the only ones. And this is where a talk by Michael Hoffmann (Object Theory) comes in handy.
There might be more; nevertheless, this is a well-structured approach to finding out which scenarios might be best suited for our project.

Project Aero

And there was something else. Adobe announced the private beta of Project Aero, an augmented reality authoring tool and content delivery platform for smartphones. Project Aero will allow creatives to make augmented reality artifacts without needing to learn tools like the Unity game engine. So what does that mean? Adobe Creative Cloud applications like Photoshop and Dimensions will be able to turn things like image, video, or PSD files into AR artifacts that become part of the physical world around you, so you can walk through the layers of a PSD file, for example. Project Aero uses the USDZ file format for Apple's ARKit.

USDZ or glTF

USDZ has Apple, Pixar, and Adobe behind it, but glTF has Microsoft, Google, and Facebook. And, contrary to Apple's usual MO, USDZ is actually an open format, just like glTF. Time will tell which format will become the industry standard, but our money is on glTF.

War or Peace

Linda and Stephan do prefer peace
One more thing: We showed our Macke VR prototype in collaboration with @DeutscheWelle at the War or Peace Conference. Our @MIZBabelsberg co-funded project @xrdok will enable journalists to build such prototypes themselves. #warorpeace

Week 03

Playing around with Poly

Welcome back. This time we met via Skype to discuss next steps. Ronny has been busy starting to implement our component system.

So what does that mean? You can now look at an object in an A-Frame test scene and grab it with a gaze or a click on your computer. And you can drop it on another object. Just some 40 lines of code, but our very first two defined components. More to come. The goal is to enable content producers to enhance objects in scenes to explain stuff or tell stories.

Spatial Storytelling

Let me give you some context on why that matters. Actually, “giants like Facebook, Google and Microsoft as well as startups like Sketchfab are racing to provide useful services that turn the Web 3D.” (*)

Facebook has just announced that it will embrace “a new type of content for you to upload alongside text, photos and videos. 3D objects and scenes saved in the industry standard glTF 2.0 format can be dragged straight to a browser window to add to your Facebook account.” (*)

Some of that is not necessarily new. Stereoscopy, a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision, has been around for ages.

But 3D models in the digital age can do way more than this nice little trick to get people excited for a couple of seconds on Facebook. The already mentioned industry standard glTF does the trick. We introduced it last week and have seen it popping up here and there:

We offer glTF as the standard export for all downloadable models in addition to the original file format that was uploaded. That’s over 100,000 models currently available for free download under Creative Commons licensing.

And that is just Sketchfab. We want to make these objects easily usable in a scene and add components. And you can add more components if you like or need. It will be open source.

Turn the Web 3D

People did not always get the idea of scrolling. This image explains that text is not suddenly lost just because it is no longer visible when you scroll.

3D objects can and will change and enhance the ways in which we tell stories on the good old web. Especially when you take these objects and put them somewhere (like into the real reality).

This can very easily be done in your mobile browser right now. For example on an iPhone:

Try https://shop.magnolia.com/collections/arkit

But there is more. The iPhone X, for example, has introduced 3D head tracking (face tracking) using its TrueDepth camera. This allows the app to infer the 3D position of the user's eyes. How cool is that?

Try https://itunes.apple.com/de/app/theparallaxview/id1352818700?mt=8

And again: up until now we have just been talking about mobile usage on a phone. Just imagine what will be possible when you are wearing mixed reality glasses. You will be able to add 3D objects to the real reality. You might not even be able to distinguish them from the real world. Check out Mica.

First things first

Okay, back to XRDOK. We are aiming for the easy stuff in the beginning. On a technical level Ronny will start by adding external 3D models to our test scene and fiddle around a bit with AR. But surely we want to tackle VR first.

We have already started thinking about proper test scenes that represent scenarios useful for content producers. We are totally open for discussion and will run several hackathons to let you play around with the tool along the way.

Week 01

Ronny playing around with A Frame
We just had our internal kickoff at MIZ Babelsberg: discussing milestones, talking about the MVP (minimum viable product) and events to come in the weeks and months ahead (including hackathons) and – of course – at the center of it all: we started shaping a shared product vision.
Working at the pyramid
We agreed on building a very first scene in the next week, including two objects and one action, in order to derive a proper working plan.

What the heck are we actually aiming for?

If you are asking yourself what this is all about, let me try to clarify. We are deeply convinced that we are slowly transitioning to a new computing paradigm that will not necessarily be dominated by smartphones. “2018 marks the beginning of the end of traditional smartphones. During the next decade, we will start to transition to the next era of computing and connected devices, which we will wear and will command using our voices, gesture and touch.” (Amy Webb)

VR and AR – currently we prefer the term XR – will shape the way in which we consume content, information and stories. On mobile phones (for a start), on head-mounted displays (which will look better and better and become lighter and lighter) and on contact lenses or elsewhere (maybe someday). All this will at the same time feel as if it takes forever and happen very, very fast. Not fast enough for some (*), but it is actually happening (*) right now: new forms and technical possibilities to tell stories and to transfer knowledge.

XR Dok – what can I do with it?

Our open source toolkit will allow content producers to combine CGI objects with actions in spaces. For example? Imagine a 3D model of a car that can be annotated with text and information. When you are writing an article on a specific car, just add a 3D model that users can look at from various perspectives. This works in VR mode, but it could certainly work in AR mode as well. Just park the car on your table.
XR Dok will, amongst other things, make use of OpenStreetMap. And it will allow integration. As a content producer you will have the chance to build a story in CGI very easily with our toolkit.
One of the major challenges in our project is to combine technical knowledge with content creation ideas for emerging media and to explain all this to journalists, content producers, friends and family.

WTF glTF

In order to learn something new together, we will start throwing in things that some of us learn along the way. Today: a new file format called glTF. glTF (GL Transmission Format) is a file format for 3D scenes and models using the JSON standard. It is described by its creators as the “JPEG of 3D.”
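And glTF plugs straight into what we covered above: A-Frame can load such a file with its built-in gltf-model component. The model path below is a placeholder; any glTF 2.0 model from Sketchfab or Poly works:

```html
<a-scene>
  <!-- A-Frame's built-in gltf-model component loads a glTF file.
       "models/car.gltf" is a placeholder path, not a real asset. -->
  <a-entity gltf-model="url(models/car.gltf)" position="0 0 -4"></a-entity>
</a-scene>
```

One line of markup per model – which is exactly why we care about this format.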

Hello!

City Tarif, Haiyti

Welcome. How are you? Welcome to XRDOK, our new project. We are Vragments. We are building the web-based VR editor Fader, and we have the urge to add something new.

This time we want to build upon existing frameworks like A-Frame in order to come up with something even more useful: an open source XR toolkit for CGI-based storytelling. Supported by MIZ Babelsberg. Thx!

We want to offer an easy solution for storytellers, journalists and others to play around with objects, rooms and interactions in VR, AR and whatever comes along.

Next on the list: our internal kick-off event. We'll be back next week.

\,,/(^_^)\,,/