A-Frame: A web framework for building Virtual Content
Andrew Solis | TACC
Brian McCann Ph.D. | TACC
- I've taken these notes from the A-Frame presentation; however, I've augmented them, since much of what they describe applies to other implementations, which I'll go over
- Explain to folks the growing area of different realities and how they can be used in a research environment
- Introduction into A-frame, what it is and how it can be used
- End with workshop where users will be able to build their own A-frame scene
- If you would like to see the original slides please check out my github
- There have been recent advancements to deliver virtual experiences and content to the general public in the past decade
- Companies are promoting new technologies to deliver these experiences such as with the Oculus, HTC VIVE, Microsoft Hololens, etc.
- Companies are also investing into creating applications and SDK packages as a means for users to create their own virtual experiences for any device ex. Apple, Google, Unity3D
- While this technology has a large presence in the video game industry, researchers have long used it as well
- Virtual Reality has been used for medical training of surgeons to be better prepared for real life surgeries
- VR has also been used as a form of therapy for veterans with PTSD and to reduce a person's phobias
- CAVEs have been used since the early '90s as a form of total user immersion, and have been used for simulations, scientific visualization, and human-data interaction analysis
- It is only natural that researchers continue to use these emerging technologies to see if and how they can improve their research. But what are these technologies really?
- Does anybody know the difference between Virtual, Augmented, and Mixed Reality?
- These systems are defined by research papers, industry phrasing, and individual interpretation, so there is no single clear definition, but there is a general understanding of each and how they relate
- Let's take a look at how each of these is defined
- Virtual reality is a technology platform that transports the viewer to immersive 3D environments
- users can interact both with the environment and content
- Can involve using a head-mounted display
- Examples include: HTC VIVE, Oculus, Google Daydream/Cardboard
- Augmented Reality overlays digital information over real world elements
- Information about real-world elements is provided to the viewer, which can be as simple as text or as complicated as a simulation
- The information provided to the user does not emulate real world objects
- Notable examples of hardware and technology include HUDs (as in FPS video games) and Google Glass
- Use cases include field workers, such as those in construction or engineering, relaying information back based on what the system is viewing
- Mixed Reality brings together real-world and digital elements
- Mixed Reality merges virtual elements into the physical world so they look as if they are really there and interact in real time
- Notable examples: Microsoft Hololens, Magic Leap, Meta2
- Range from cheap to expensive, tethered and untethered, controllers, tracking
- Different devices for different purposes, some overlap on use cases
- CAVE2 was developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) (think CAVE but with monitors instead of projections)
- This also doesn't include custom made systems made by industry or research organizations
- There are many possible ways for you to create virtual content
- Unity and Unreal are some of the biggest players in the business
- Vuforia is used with Unity to make creating AR applications easy
- Popular AR SDKs are available from Google and Apple
- A-frame popular for web developers, though you don't have to be one
- Used in the research realm as well: Kitware has added a way to visualize VTK data with VR headsets, and has created builds of ParaView to do this
- UCSF (University of California, San Francisco) developed ChimeraX as an interactive molecular visualization tool
- My goal is not to say which one is best, but rather show you how easy it is to create a virtual experience using a technology I am familiar with
- What you are doing affects your choices for software and systems:
  * What is my data (triangles, mesh, points)?
  * How big is my data?
  * Is there current support for my data?
  * How can this help tell a story?
  * How can it help others find new discoveries?
- While the tools for creating virtual content are more available than ever, users still need to verify that a given platform actually supports their purposes
A web framework for building virtual content and experiences
- Launched in December 2015
Goals of A-Frame:
- Easy for web developers to create VR content, without graphics knowledge
- Prototype and experiment WebVR and VR UX faster
- Vehicle to kickstart WebVR ecosystem
- Started as a technology for VR experiences but now changing to support all types of virtual content
<!DOCTYPE html>
<html>
<body>
<h1>My First Heading</h1>
<p>My first paragraph.</p>
</body>
</html>
- Standard markup language where you embed elements inside each other to display content
- Example: the main html tag, a heading, and a paragraph
How are we able to immediately interact with virtual interfaces?
Browser APIs that enable WebGL rendering to headsets and access to VR sensors
API : A set of routines, protocols, and tools for building software applications
WebGL:
- A JavaScript API for rendering interactive 3D and 2D graphics within any compatible web browser
API:
- Optimized rendering path to headsets
- Access position and rotation (pose) data
History:
- Initial WebVR API by Mozilla
- Working W3C community group
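The pose access described above can be sketched with a quick feature check (illustrative only; the experimental WebVR API has since been superseded by WebXR):

```html
<script>
  // Sketch of querying headsets via the experimental WebVR API.
  // navigator.getVRDisplays() only exists in WebVR-capable browsers.
  if (navigator.getVRDisplays) {
    navigator.getVRDisplays().then(function (displays) {
      if (displays.length > 0) {
        var frameData = new VRFrameData();
        displays[0].getFrameData(frameData);     // fills pose data
        console.log(frameData.pose.position);    // headset position
        console.log(frameData.pose.orientation); // headset rotation (quaternion)
      }
    });
  }
</script>
```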
Core technologies used for A-Frame
- We won't go into these in detail and will mostly be learning A-Frame, but this helps give you a sense of how it works and what it takes care of
- If you find yourself interested later on, you have the option to customize any level of the pipeline as you wish
Let's take a look at some examples.
<html>
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
<a-scene>
</a-scene>
</html>
- Just HTML
- Drop a script tag, no build steps
- Using Custom HTML Elements
- One line of HTML
<a-scene> handles: canvas, camera, renderer, lights, controls, render loop, WebVR polyfill, VREffect
- Take for granted the fact that it handles all of these options for you which you can customize later
- Put stuff inside our scene...
<html>
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
<a-scene>
<a-box color="#4CC3D9" position="-1 0.5 -3" rotation="0 45 0"></a-box>
<a-cylinder color="#FFC65D" position="1 0.75 -3" radius="0.5" height="1.5"></a-cylinder>
<a-sphere color="#EF2D5E" position="0 1.25 -5" radius="1.25"></a-sphere>
<a-plane color="#7BC8A4" position="0 0 -4" rotation="-90 0 0" width="4" height="4"></a-plane>
<a-sky color="#ECECEC"></a-sky>
</a-scene>
</html>
- Basic 3D primitives with Custom Elements
- Quickly look at our example...
A simple scene that can run on a mobile phone, and on a headset if available
- Try going to the A-Frame website at the link above and see the same scene for yourself
- Based on HTML, compatible with all existing libraries/frameworks
- Good reason to have HTML as an intermediary layer between the page and WebGL/three.js
- Existing web tools are all built on top of HTML
- Under the hood, A-Frame is an extensible, declarative framework for three.js...
- Let's take a peek at how A-frame works under the hood....
- Is an entity-component framework
- Popular in game development
- All objects in the scene are entities that are inherently empty objects; plug in components to attach appearance / behavior / functionality
- On the 2D web, laid-out elements have fixed behavior
- In 3D/VR, objects come in infinite types and complexities, so this is an easy way to build up different kinds of objects
<a-entity>
</a-entity>
- Start with an <a-entity>
- By itself, it has no appearance, behavior, or functionality
- Plug in components to add appearance, behavior, functionality
<a-entity
geometry="primitive: cylinder; radius: 1.5; height: 2;"
material="color: #B96FD3; roughness: 0.4">
</a-entity>
- Syntax similar to CSS styles
- Component names as HTML attributes
- Component properties and values as HTML attribute value
- If you haven't used CSS or HTML then you can think about it as defining parameters of an object. Here we are defining the geometry of our entity, and it's material
- Giving it a color in hexadecimal format but can also use rgb
- roughness is used to define how the material scatters light. A rougher material (1.0) will reflect light in more directions
- units in a-frame are in meters
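As a small illustrative variation on the snippet above, the same material can be written with an rgb() color string instead of hex:

```html
<a-entity
  geometry="primitive: cylinder; radius: 1.5; height: 2"
  material="color: rgb(185, 111, 211); roughness: 1">
  <!-- rgb(185, 111, 211) is the same purple as #B96FD3;
       roughness: 1 scatters reflected light in more directions -->
</a-entity>
```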
<a-entity
geometry="primitive: cylinder; height: 3; radius: 1.5;"
material="color: #B96FD3; roughness: 0.4"
position="-1 2 -6" rotation="45 0 90" scale="2 2 2">
</a-entity>
- Can add more specifications, such as the position of our element in the scene, whether we'd like to rotate it and about which axes, and whether we'd like to scale it
- All of these attributes are mapped to x, y, and z coordinates respectively
<a-entity
geometry="primitive: cylinder; height: 2; radius: 1.5"
material="color: #B96FD3; roughness: 0.4"
position="-1 2 -6" rotation="45 0 90" scale="2 2 2"
animation="property: rotation; loop: true; to: 45 360 90">
</a-entity>
- Here I am telling the object to perform an animation (a rotation in this example) and setting other possible parameters, such as whether I'd like it to loop, what the final value should be after the rotation, etc.
- You can control a lot of different elements of an object and add multiple animations, but this is a simple way of understanding how to create simple actions for your events
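As a sketch of that idea, multiple animations can live on one entity by suffixing the component name (this assumes the animation component used above, and the fade animation is added purely for illustration):

```html
<a-entity
  geometry="primitive: cylinder; height: 2; radius: 1.5"
  material="color: #B96FD3; roughness: 0.4"
  position="-1 2 -6"
  animation="property: rotation; loop: true; to: 45 360 90"
  animation__fade="property: material.opacity; dir: alternate; loop: true; to: 0.3">
  <!-- animation__fade runs alongside animation; the __suffix
       lets several instances of the same component coexist -->
</a-entity>
```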
What if I wanted to load a model from a file?
<a-assets>
<a-asset-item id="bb8-json" src="../materials/bb-unit-threejs/bb-unit.json"></a-asset-item>
</a-assets>
<a-entity
object-model= "src: #bb8-json"
position="0 1 -3" rotation="0 0 0" scale="0.01 0.01 0.01">
</a-entity>
- I have a predefined object in a JSON file that I reference inside an a-asset-item tag (you can use this to keep track of all the assets in your scene)
- I reference my object file later to load it, and change some of its parameters
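A-Frame also ships a gltf-model component for the more common glTF format; the same asset pattern applies (the id and file path below are placeholders, not files from this workshop):

```html
<a-assets>
  <!-- "models/robot.gltf" is an illustrative path -->
  <a-asset-item id="robot" src="models/robot.gltf"></a-asset-item>
</a-assets>
<a-entity
  gltf-model="#robot"
  position="0 1 -3" scale="0.5 0.5 0.5">
</a-entity>
```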
What about having my object perform some action? How could I write my own component?
AFRAME.registerComponent('my-component', {
schema: {
foo: {type: 'selector'},
bar: {default: 256}
},
init: function () { /* ... */ },
update: function () { /* ... */ },
remove: function () { /* ... */ },
tick: function () { /* ... */ }
});
<a-box my-component="foo: #box; bar: 300"></a-box>
- 'my-component': name of the attribute used to attach the component to an A-Frame entity
- schema: defines how data is parsed from HTML
- Lifecycle methods:
  - init: component attached, like componentDidMount
  - update: component data updated, like componentWillReceiveProps
  - remove: component detached, like componentWillUnmount
  - tick: run on every frame
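As a concrete sketch of those lifecycle methods, here is a hypothetical spin component (the component name and speed property are made up for illustration):

```html
<script>
  // Hypothetical component: spins an entity around the y-axis
  // at a configurable number of degrees per second.
  AFRAME.registerComponent('spin', {
    schema: {
      speed: {default: 90}  // degrees per second
    },
    tick: function (time, timeDelta) {
      // tick runs every frame; timeDelta is milliseconds since the last frame
      this.el.object3D.rotation.y +=
        THREE.Math.degToRad(this.data.speed) * (timeDelta / 1000);
    }
  });
</script>
<a-box spin="speed: 45" color="#4CC3D9" position="0 1 -3"></a-box>
```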
- These are some components that ship with A-Frame
- A-Frame is fully extensible at its core so...
- Community has filled the ecosystem with tons of components
- Components can do whatever they want, have full access to three.js and Web APIs
- The component ecosystem is the lifeblood of A-Frame
- Physics, leap motion, particle systems, audio visualizations, oceans
- Drop these components as script tags and use them straight from HTML
- Advanced developers empowering other developers
- Working on collecting these components...
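Dropping in a community component looks like this sketch, using the real aframe-particle-system-component (the exact script URL is illustrative; check the component's README for the current one):

```html
<script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
<!-- Community components are published as plain script files
     you drop in after A-Frame itself; URL below is illustrative -->
<script src="https://unpkg.com/aframe-particle-system-component/dist/aframe-particle-system-component.min.js"></script>
<a-scene>
  <!-- the component registers a particle-system attribute with presets -->
  <a-entity particle-system="preset: snow"></a-entity>
</a-scene>
```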
Curated collection of A-Frame components.
- Collecting them into the A-Frame registry
- Like a store of components that we make sure work well
- People can browse and search for components or install them....
Visual tool for A-Frame. Just <ctrl>+<alt>+i.
@datatitian
@sleighdogs
@drryanjames
@mozillavr
The Washington Post
@jerome_etienne
- Open source and inclusive project
- Most work done on GitHub
- Active community on Slack to share projects, interact, hang out, seek help
- Featured projects on the awesome-aframe repository and the A Week of A-Frame blog
# References
- Ngo, Kevin, and Diego Marcos. "A-Frame Presentation." _aframe.io_. Mozilla Corporation, n.d. Web. 15 June 2017.
- Vaughn, Matthew, Ph.D. "A-Frame Presentation." _Heroku_. TACC, n.d. Web. 15 June 2017.
- Jones, Brandon. "WebVR Explained." _GitHub_. W3C, n.d. Web. 15 June 2017.
- Tutorialspoint.com. "WebGL Cube Rotation." _Tutorialspoint_. N.p., n.d. Web. 15 June 2017.
- Petitcolas, Jonathan. "Create a Rotating Cube in WebGL with Three.js." _jonathan-petitcolas_. N.p., n.d. Web. 16 June 2017.
- Ngo, Kevin, and Po-chiang Chao. "An Interactive Course for WebVR." _aframe.io_. Mozilla Corporation, n.d. Web. 16 June 2017.
- "A-Frame Documentation." _aframe.io_. Mozilla Corporation, n.d. Web. 16 June 2017.
- "Getting Started with HTML." _developer.mozilla.org_. Mozilla Corporation, n.d. Web. 16 June 2017.
- Anatomy-of-an-html-element. Digital image. _mdn.mozillademos.org_. Mozilla Corporation, n.d. Web. 16 June 2017.
- Grumpy-cat-small. Digital image. _mdn.mozillademos.org_. Mozilla Corporation, n.d. Web. 16 June 2017.
- Larsen, Christian R and Soerensen, Jette L and Grantcharov, Teodor P and Dalsgaard, Torur and Schouenborg, Lars and Ottosen, Christian and Schroeder, Torben V and Ottesen, Bent S. "Effect of virtual reality training on laparoscopic surgery: randomised controlled trial" _BMJ_ BMJ Publishing Group Ltd. **10.1136/bmj.b1802**
- Hoffman, Hunter G. "Virtual-Reality Therapy." _Scientific American_, vol. 291, no. 2, 2004, pp. 58-65. _JSTOR_, JSTOR.
# References (continued)
- Thomas D. Parsons, Albert A. Rizzo. "Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: A meta-analysis" _Journal of Behavior Therapy and Experimental Psychiatry_, vol 39, Issue 3, 2008, pp. 250-261.
- Carolina Cruz-Neira, Daniel J. Sandin, and Thomas A. DeFanti. 1993. "Surround-screen projection-based virtual reality: the design and implementation of the CAVE." _In Proceedings of the 20th annual conference on Computer graphics and interactive techniques_ (SIGGRAPH '93). ACM, New York, NY, USA, 135-142. DOI:
- "Demystifying the Virtual Reality Landscape" _intel.com_ Intel. Web. July 24, 2018.
- Johnson, E. "What are the differences among virtual, augmented and mixed reality?" _recode.net_. Recode. Web. July 24, 2018.