Hello and welcome to TechSimplify! Today we are going to look into a couple of technologies which you might have heard of, or which some of you might even have used in some form or another. Most of you have probably seen VR headsets. Remember Pokemon Go, the game which literally got us on our feet and made us walk all around the city? Pokemon Go was possible because of Augmented Reality. We have all heard about AR (Augmented Reality) and VR (Virtual Reality), and maybe even used them, but most of us (including me) don’t really know what exactly they are, how they really work, or how to create our very own AR & VR applications. Well, don’t worry, because we are going to learn all about AR & VR today. Let’s begin 🙂
Augmented Reality (AR)
AR is basically an enhanced version of reality: a live, direct or indirect view of a physical, real-world environment whose elements are “augmented” by computer-generated or extracted real-world sensory input such as sound, video, graphics, haptics or GPS data. In simple terms, AR is a technology which allows us to integrate a fictional or imaginary world (the Pokemon world) into our real world via a device like a smartphone, along with information about the surrounding world such as location, time and other data about the user. For example, while playing Pokemon Go you would encounter water-type Pokemon when you were near a body of water. AR may not sound that exciting, but when you think about it, the applications for AR are huge: it can be as simple as a text or call notification, or as complex as an interactive simulation of a complex surgery.
History of AR
Ivan Sutherland can be credited with starting the field that would eventually turn into both VR and AR, an industry worth $1.1 billion in 2016 and projected to reach $150 billion by 2020.
Soon after Ivan Sutherland, Myron Krueger developed an Artificial Reality lab known as Videoplace.
How does it work?
The idea is quite simple, but the execution was genius. Two people in different rooms, each containing a projection screen and a video camera, were able to communicate through their projected images in a shared space on the screen. No computer was involved in the first Videoplace environment in 1975.
In 1990, the term “Augmented Reality” was coined at Boeing by researcher Tom Caudell. Tom Caudell, along with his colleague David Mizell, was asked to come up with an alternative to the expensive diagrams and marking devices then used to guide workers on the factory floor. They proposed replacing the large plywood boards, which contained individually designed wiring instructions for each plane, with a head-mounted apparatus that would display a plane’s specific schematics through high-tech eyewear and project them onto multipurpose, reusable boards. Instead of reconfiguring each plywood board manually at each step of the manufacturing process, the customized wiring instructions would essentially be worn by the worker and altered quickly and efficiently through a computer system.
In the early 90s, Virtual Fixtures was developed by Louis Rosenberg at the US Air Force. A virtual fixture is an overlay of augmented sensory information on a workspace, intended to improve human performance in direct and remotely manipulated tasks. Virtual Fixtures was a pioneering platform in virtual reality and augmented reality technologies.
In 1994, Julie Martin created the first augmented reality theater production, Dancing In Cyberspace. Funded by the Australia Council for the Arts, it featured dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The installation used Silicon Graphics computers and a Polhemus sensing system. Dancing In Cyberspace was one of the first applications of augmented reality in entertainment.
In 1999, the NASA X-38 spacecraft was flown using a Hybrid Synthetic Vision system that used augmented reality to overlay map data, providing enhanced visual navigation during flight tests.
For the 2003 NFL season, Sportvision unveiled the first computer graphics system capable of inserting the 1st & Ten line into footage from the popular Skycam, the NFL’s mobile camera that provides an aerial perspective of the field.
In 2009, ARToolKit brought augmented reality to web browsers.
In 2013, car manufacturers began to use augmented reality in vehicle service manuals. The Volkswagen MARTA app, which stands for Mobile Augmented Reality Technical Assistance, provides step-by-step repair assistance, allowing technicians to see in advance how a repair or maintenance task should be performed.
In 2014, Google announced shipment of Google Glass devices to consumers, setting the trend for wearable AR and eventually leading to around $1.1 billion of investment in augmented reality and virtual reality.
By 2020, the projected investment in AR & VR is $150 billion.
That is it for the history of AR. We have covered the introduction and history of augmented reality; now let’s move on to the most important section –
How Does Augmented Reality Work?
Augmented reality is closer to the real world than virtual reality. The basic idea of augmented reality is to superimpose graphics, audio and other sensory enhancements over a real-world environment in real time. Sounds pretty simple, right? Well, it is simple, but only in theory; the challenge is in the implementation. Augmented reality adds graphics, sounds, haptic feedback and even smell to the natural world as it exists. Both video games and cell phones are driving the development of augmented reality. Everyone from tourists, to soldiers, to someone looking for the closest subway stop can now benefit from the ability to place computer-generated graphics in their field of vision. To understand the workings of augmented reality, let’s dive into one of its applications.
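The superimposition step at the heart of this idea can be sketched in a few lines. The sketch below is purely illustrative: frames are plain grayscale grids and the `superimpose` helper is invented for the example; a real AR pipeline does this per frame on the GPU, with camera tracking deciding where the overlay lands.

```python
# Illustrative sketch: superimpose a rendered graphic onto a camera frame.
# Frames are grayscale grids (lists of lists of 0-255 values).

def superimpose(frame, overlay, top, left, alpha=0.7):
    """Blend `overlay` onto `frame` at (top, left) with the given opacity."""
    out = [row[:] for row in frame]          # copy; leave the camera frame intact
    for i, row in enumerate(overlay):
        for j, pix in enumerate(row):
            y, x = top + i, left + j
            if 0 <= y < len(out) and 0 <= x < len(out[0]):   # clip to the frame
                out[y][x] = round(alpha * pix + (1 - alpha) * out[y][x])
    return out

camera_frame = [[10] * 6 for _ in range(4)]  # stand-in for one video frame
graphic = [[200, 200], [200, 200]]           # stand-in for a rendered sprite
augmented = superimpose(camera_frame, graphic, top=1, left=2)
```

The `alpha` value is what makes the graphic look like an overlay rather than a replacement: the real-world pixels still show through.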
In February 2009, at the TED conference, Pattie Maes and Pranav Mistry presented their augmented-reality system, which they developed as part of MIT Media Lab’s Fluid Interfaces Group. They call it SixthSense, and it relies on some basic components that are found in many augmented reality systems:
- Small projector
- Camera
- Mirror
- Smartphone
These components are strung together in a lanyard-like apparatus that the user wears around the neck. The user also wears four colored caps on the fingers, which are used to manipulate the images that the projector emits. SixthSense is remarkable because it uses simple, off-the-shelf components that cost around $350. It is also notable because the projector essentially turns any surface into an interactive screen. Essentially, the device works by using the camera and mirror to examine the surrounding world, feeding that image to the phone (which processes the image, gathers GPS coordinates and pulls data from the Internet), and then projecting information from the projector onto the surface in front of the user, whether it’s a wrist, a wall, or even a person. Because the user wears the camera on the chest, SixthSense augments whatever the user looks at; for example, if you pick up a can of soup in a grocery store, SixthSense can find and project onto the can information about its ingredients, price, nutritional value — even customer reviews. Isn’t it great that one of the greatest applications of AR is built using components which pretty much everyone has access to? If you are interested, check out the instructions for building your own prototype.
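The capture, process, and project loop described above can be sketched as three stand-in functions. Everything here (the labels, the product table, the function names) is invented for illustration; the real system would use computer vision for recognition and a live Internet lookup for the data.

```python
# Illustrative sketch of the SixthSense loop: capture -> recognize -> fetch -> project.
# The labels, product table, and function names are invented for the example.

PRODUCT_DB = {"soup_can": {"price": "$1.99", "rating": "4/5"}}

def recognize(camera_image):
    # Stand-in for the computer-vision step: the image is pre-labeled here.
    return camera_image["label"]

def fetch_info(label):
    # Stand-in for the phone's Internet lookup.
    return PRODUCT_DB.get(label, {})

def project(surface, info):
    # Stand-in for the projector: return what would be drawn on the surface.
    details = ", ".join(f"{k}={v}" for k, v in info.items())
    return f"on {surface}: {details}"

image = {"label": "soup_can"}            # what the chest-mounted camera sees
display = project("soup can", fetch_info(recognize(image)))
```

The point of the sketch is the shape of the pipeline, not any single step: each stage feeds the next, and the loop repeats for whatever the camera sees next.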
Virtual Reality (VR)
What is virtual reality (VR)? Technically, virtual reality is the term used to describe a three-dimensional, computer-generated environment which can be explored and interacted with by a person. That person becomes part of this virtual world, is immersed within the environment and, whilst there, is able to manipulate objects or perform a series of actions. In other words, the term comes, naturally, from the definitions of both ‘virtual’ and ‘reality’: ‘virtual’ means near, and reality is what we experience as human beings, so ‘virtual reality’ basically means ‘near-reality’. This could, of course, mean anything, but it usually refers to a specific type of reality emulation. I hope that was easier to understand.
History of VR
1838 – Stereoscopic photos & viewers
In 1838, Charles Wheatstone’s research demonstrated that the brain processes the two different two-dimensional images from each eye into a single object of three dimensions. Viewing two side-by-side stereoscopic images or photos through a stereoscope gave the user a sense of depth and immersion. The later development of the popular View-Master stereoscope (patented 1939) was used for “virtual tourism”.
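Wheatstone’s finding is still the basis of every stereoscopic display. A minimal sketch of the geometry, assuming a simple pinhole-camera model with made-up numbers:

```python
# Depth from binocular disparity under a pinhole-camera model:
# depth = focal_length * eye_separation / disparity. Numbers are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a point, from its pixel shift between the two views."""
    if disparity_px <= 0:
        return float("inf")     # no shift: the point is effectively at infinity
    return focal_px * baseline_m / disparity_px

# A nearby object shifts more between the eyes than a distant one.
near = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=52)
far = depth_from_disparity(focal_px=800, baseline_m=0.065, disparity_px=4)
```

The larger the shift (disparity) between the two images, the nearer the object: that inverse relationship is what the stereoscope exploits by feeding each eye a slightly offset photo.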
1929 – Link Trainer: The First Flight Simulator
In 1929, Edward Link created the “Link Trainer” (patented 1931), probably the first example of a commercial flight simulator, which was entirely electromechanical. It was controlled by motors linked to the rudder and steering column to modify the pitch and roll. A small motor-driven device mimicked turbulence and disturbances.
1950s – Morton Heilig’s Sensorama
In the mid-1950s, cinematographer Morton Heilig developed the Sensorama (patented 1962), an arcade-style theater cabinet that would stimulate all the senses, not just sight and sound. It featured stereo speakers, a stereoscopic 3D display, fans, smell generators and a vibrating chair. The Sensorama was intended to fully immerse the individual in the film. Click here to watch an interview with Morton Heilig about the Sensorama.
1968 – Sword of Damocles
In 1968, Ivan Sutherland and his student Bob Sproull created the first VR/AR head-mounted display (the Sword of Damocles), which was connected to a computer rather than a camera. It was a large and scary-looking contraption, too heavy for any user to comfortably wear, and was suspended from the ceiling (hence its name). The user also needed to be strapped into the device. The computer-generated graphics were very primitive wireframe rooms and objects.
1987 – Virtual reality: the name was born
Even after all of this development in virtual reality, there still wasn’t an all-encompassing term to describe the field. This changed in 1987 when Jaron Lanier, founder of the Visual Programming Lab (VPL), coined the term “virtual reality”. The research area now had a name. Through his company VPL Research, Jaron developed a range of virtual reality gear, including the DataGlove and the EyePhone head-mounted display.
1993 – SEGA announces new VR glasses
Sega announced the Sega VR headset for the Sega Genesis console at the 1993 Consumer Electronics Show. The wrap-around prototype glasses had head tracking, stereo sound and LCD screens in the visor. However, technical development difficulties meant that the device would forever remain in the prototype phase, despite four games having been developed for it. This was a huge flop for Sega.
Virtual reality in the 21st century
The first seventeen years of the 21st century have seen major, rapid advancement in the development of virtual reality. Computer technology, especially small and powerful mobile technology, has exploded, while prices are constantly driven down. The rise of smartphones with high-density displays and 3D graphics capabilities has enabled a generation of lightweight and practical virtual reality devices. The video game industry has continued to drive the development of consumer virtual reality unabated. Depth-sensing camera sensor suites, motion controllers and natural human interfaces are already a part of daily computing tasks.
How Does Virtual Reality Work?
The main challenge with VR is finding a way to display images to the user. Many systems use HMDs (head-mounted displays): headsets that contain two monitors, one for each eye. The images create a stereoscopic effect, giving the illusion of depth. Early HMDs used cathode ray tube (CRT) monitors, which were bulky but provided good resolution and quality, or liquid crystal display (LCD) monitors, which were much cheaper but unable to compete with the quality of CRT displays. Today, LCD displays are much more advanced, with improved resolution and color saturation, and have become more common than CRT monitors.
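The two-monitor trick can be sketched with a little geometry: the scene is rendered from two virtual cameras separated by the interpupillary distance (IPD), so each eye sees the same point at a slightly different angle. The positions, IPD value, and helper names below are illustrative, not taken from any particular headset's SDK.

```python
# Illustrative stereo-rendering geometry: two virtual cameras, one per eye,
# separated by the interpupillary distance (IPD). All values are made up.

import math

def eye_positions(head_pos, ipd_m=0.063):
    """Left/right virtual-camera positions for a head at head_pos = (x, y, z)."""
    x, y, z = head_pos
    half = ipd_m / 2
    return (x - half, y, z), (x + half, y, z)

def horizontal_angle(eye, point):
    """Horizontal angle (radians) at which `point` appears from `eye`."""
    dx = point[0] - eye[0]
    dz = point[2] - eye[2]
    return math.atan2(dx, dz)

left, right = eye_positions((0.0, 1.7, 0.0))
target = (0.0, 1.7, 0.5)                 # a point half a metre straight ahead
# The same point sits at slightly different angles in each eye's image;
# that difference is the disparity the brain fuses into a sense of depth.
angles = (horizontal_angle(left, target), horizontal_angle(right, target))
```

This is the display-side mirror image of Wheatstone's stereoscope: instead of measuring disparity to recover depth, the headset manufactures the disparity so the brain perceives depth.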
Other VR systems project images on the walls, floor and ceiling of a room and are called Cave Automatic Virtual Environments (CAVE). The University of Illinois at Chicago designed the first CAVE display, using a rear-projection technique to display images on the walls, floor and ceiling of a small room. Users can move around in a CAVE display, wearing special glasses to complete the illusion of moving through a virtual environment. CAVE displays give users a much wider field of view, which helps with immersion, and they allow a group of people to share the experience at the same time (though the display tracks only one user’s point of view, so others in the room are passive observers). However, CAVE displays are very expensive and require more space than other systems.
Input devices are also important in VR systems. Currently, input devices range from controllers with two or three buttons to electronic gloves and voice recognition software. There is no standard control system across the discipline. VR scientists and engineers are continuously exploring ways to make user input as natural as possible to increase the sense of telepresence. Some of the more common forms of input devices are:
- Force balls/tracking balls
- Controller wands
- Voice recognition
- Motion trackers/bodysuits
Mixed reality (MR)
Mixed reality (MR), sometimes referred to as hybrid reality, is basically the merging of real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time. Mixed reality is the next evolution in human, computer, and environment interaction, and it unlocks possibilities that were previously restricted to our imaginations. It is made possible by advancements in computer vision, graphical processing power, display technology, and input systems. The term mixed reality was originally introduced in a 1994 paper by Paul Milgram and Fumio Kishino, “A Taxonomy of Mixed Reality Visual Displays.” Their paper introduced the concept of the virtuality continuum and focused on how their taxonomy applied to displays. Since then, the application of mixed reality has grown beyond displays to include environmental input, spatial sound, and location.
The above Venn diagram should clarify any doubts regarding Mixed Reality. To see Mixed Reality in action click here. The video is created by Microsoft, demonstrating Mixed Reality.
Applications of Mixed Reality (MR)
MR has found its way into a number of applications, evident in the arts and entertainment industries. However, MR is also branching out into the business, manufacturing and education worlds with systems such as these –
- IPCM – Interactive Product Content Management
- SBL – Simulation Based Learning
- Military Training
- Real Asset Virtualization Environment (RAVE)
- Remote working
SDKs For Augmented Reality
1. Kudan AR
The Kudan AR SDK is chosen by professional developers looking for an all-in-one SDK that supports marker-based or markerless tracking and location requirements. It’s fast and light, and ready to be ported to any platform with any peripherals. Kudan describes it as the only SDK engine that supports both marker-based and markerless tracking. The KudanCV engine is written in C++ and has architecture-specific optimizations written in assembly to give the fastest and most robust performance with the minimum memory footprint. The AR SDKs have native platform APIs, such as Objective-C for iOS and Java for Android. A plugin for the cross-platform Unity game engine is also available. Click here to download the demo, click here to download the native Kudan AR SDKs and Unity plugin, and check out the pricing model for Kudan AR.
2. Vuforia

Vuforia is an Augmented Reality Software Development Kit (SDK) for mobile devices that enables the creation of augmented reality applications. It uses computer vision technology to recognize and track planar images (Image Targets) and simple 3D objects, such as boxes, in real time. This image registration capability enables developers to position and orient virtual objects, such as 3D models and other media, in relation to real-world images when these are viewed through the camera of a mobile device. The virtual object then tracks the position and orientation of the image in real time so that the viewer’s perspective on the object corresponds with their perspective on the Image Target, making it appear that the virtual object is part of the real-world scene.
The Vuforia SDK supports a variety of 2D and 3D target types, including markerless Image Targets, 3D Multi-Target configurations, and a form of addressable fiducial marker known as a VuMark. Additional features of the SDK include localized occlusion detection using ‘Virtual Buttons’, runtime image target selection, and the ability to create and reconfigure target sets programmatically at runtime. Vuforia supports multiple platforms; to download the SDKs, click here. You can use Vuforia for free to develop and test your application. Whether you are working on the next big game or on a breakthrough solution for your business, Vuforia aims to give you all the tools you need to succeed. Community support is available through the Vuforia developer forums. For more pricing options, click here.
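The image-registration idea described above can be sketched independently of any SDK: once the tracker estimates the target’s pose (a rotation plus a translation), every point of the virtual model, defined relative to the target, is mapped into camera space with that same transform each frame. The pose and helper below are made up for illustration and are not Vuforia API calls.

```python
# Illustrative pose application: map a model point defined relative to an
# image target into camera space, given a tracked rotation + translation.
# This is plain geometry, not a Vuforia API call.

import math

def apply_pose(yaw_deg, translation, point):
    """Rotate `point` about the vertical axis by yaw_deg, then translate."""
    a = math.radians(yaw_deg)
    x, y, z = point
    rx = x * math.cos(a) + z * math.sin(a)
    rz = -x * math.sin(a) + z * math.cos(a)
    tx, ty, tz = translation
    return (rx + tx, y + ty, rz + tz)

# A corner of a virtual model sitting on the target, seen with the target
# rotated 90 degrees and half a metre in front of the camera:
camera_space = apply_pose(90, (0.0, 0.0, 0.5), (0.1, 0.0, 0.0))
```

Because the same transform is re-estimated and re-applied every frame, the virtual object appears glued to the printed image as the camera moves around it.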
3. Wikitude

OBJECT RECOGNITION — Object recognition technology creates an additional touch point for interacting with users, allowing real-time, 360-degree AR experiences around real-world objects. Build augmented reality experiences using a variety of real-world objects.
INSTANT TRACKING — Ditch the markers! Instant Tracking is the first feature using Wikitude’s SLAM technology. Easily map environments and display AR content without the need for a target image (markerless). This feature works in both indoor and outdoor environments.
MULTIPLE IMAGE TARGETS — This feature enables recognition of several images simultaneously. Once the images are recognized, developers can layer 3D models, buttons, videos, images and more on each target. Additionally, augmentations can interact with each other based on the targets’ positions. Multiple image target recognition can be used to bring interactivity to many apps.
CLOUD RECOGNITION — Wikitude’s Cloud Recognition service allows developers to work with thousands of target images hosted in the cloud. Wikitude’s technology is a scalable solution with very quick response times and a high recognition rate. Each plan includes 1,000,000 scan calls to the cloud service per month. Dedicated server options and custom offerings are available for enterprises.
3D AUGMENTATIONS — The Wikitude SDK can load and render 3D models in the augmented reality scene. Import your 3D models from your favorite tool, like Autodesk® Maya® 3D or Blender. Based on the new Native API, Wikitude also offers a plugin for Unity3D, so you can integrate Wikitude’s computer vision engine into a game or application fully based on Unity3D.
4. ARToolKit

ARToolKit is software that lets programmers easily develop augmented reality applications. Augmented reality is the embedding of computer-generated content into the natural environment, and it has many potential applications in entertainment, media, advertising, industry, and academic research. The source code for this project is hosted on GitHub, and the compiled SDKs for all platforms (Mac OS X, PC, Linux, Android, iOS), along with the ARToolKit plugin for Unity3D, are available on the Downloads page of the official website.
- Robust Tracking, including Natural Feature Tracking
- Strong Camera Calibration Support
- Simultaneous tracking and Stereo Camera Support
- Multiple Languages Supported
- Optimized for Mobile Devices
- Full Unity3D and OpenSceneGraph Support
5. EasyAR

EasyAR is an augmented reality engine that is easy to use and free. EasyAR supports AR based on planar targets, smooth loading and recognition of more than 1,000 local targets, video playback based on hardware codecs, transparent and streaming video, QR code recognition, and tracking of multiple targets simultaneously. EasyAR can be used on both PC and mobile platforms. EasyAR does not show watermarks and has no limit on the number of recognitions. Once you have the EasyAR package or the EasyAR samples, you will need a key, so make sure to read the setup steps before you start to use EasyAR. Registration is required: register at www.easyar.com or www.easyar.cn using your email address. Registration is free. To download the SDKs, click here.
SDKs For Virtual Reality
1. A-Frame (VR)
A-Frame is an open-source web framework for building virtual reality (VR) experiences, primarily maintained by Mozilla and the WebVR community. It is an entity-component-system framework for three.js in which developers can create 3D and WebVR scenes using HTML. HTML provides a familiar authoring tool for web developers and designers, while incorporating a popular game development pattern used by engines such as Unity. A-Frame is built on top of HTML, making it simple to get started. But A-Frame is not just a 3D scene graph or a markup language; the core is a powerful entity-component framework that provides a declarative, extensible, and composable structure for three.js.
2. Tilt Brush
Tilt Brush is a room-scale 3D painting virtual reality application developed and published by Google. The software was released for Microsoft Windows on April 5, 2016. The application is designed for motion interfaces in virtual reality but also works with keyboard and mouse. Users can export their images as animated GIFs. Google acquired Skillman & Hackett (and their program) in mid-to-late 2015. Tilt Brush was released at the HTC Vive’s launch on April 5, 2016, at no cost when pre-ordering the HTC Vive. On February 24, 2017, it was announced that Tilt Brush is available on both Oculus Rift and Vive.
3. JanusVR

JanusVR is a corporation based in San Mateo, California, and Toronto, Ontario, that develops immersive web browsing software. It was founded by James McCrae and Karan Singh in December 2014. Named after Janus, the Roman god of passages, JanusVR portrays web content in multi-dimensional spaces interconnected by portals. The JanusVR platform comprises a suite of software that makes it simple to create, share and experience spatially rich internet content. The suite includes:
janusvr — a standalone web authoring and browsing tool for creating spatially rich web content.
web.janusvr — a webGL-based version of JanusVR, viewable through existing web browsers, with support for mobile VR hardware.
presence.janusvr — open-source server software, forming the social and collaborative foundation of JanusVR.
export.janusvr — comprises tools to export content from popular modeling, animation and gaming software like Unity, Unreal, Blender, Maya and SketchUp into JanusVR.
vesta.janusvr — a free web-hosting and content-sharing community integrated with JanusVR.
Download JanusVR from official website.
4. Google Jump
Jump is Google’s professional VR video solution. Jump makes 3D-360 video production at scale possible with best-in-class automated stitching. Jump cameras are designed to work with the Jump Assembler for seamless VR video production, and they are built for automated stitching: precise geometry, advanced computer vision, and a lot of computing power create the 3D-360 videos. The Jump Assembler stitches beautiful, seamless panoramas, while exposure control and tone mapping make sure it feels like one video. Assembled videos are high-resolution and come with depth data, ready for edits and visual effects. Jump creates 3D-360 videos that allow you to perceive depth in every direction: stereo ensures that near things look near and far things look far.
5. Amazon Sumerian

Amazon Sumerian lets you create and run virtual reality (VR), augmented reality (AR), and 3D applications quickly and easily, without requiring any specialized programming or 3D graphics expertise. With Sumerian, you can build highly immersive and interactive scenes that run on popular hardware such as Oculus Rift, HTC Vive, and iOS mobile devices (support for Android ARCore is coming soon). For example, you can build a virtual classroom that lets you train new employees around the world, or a virtual environment that enables people to tour a building remotely. Sumerian makes it easy to create all the building blocks needed for highly immersive and interactive 3D experiences, including adding objects (e.g. characters, furniture, and landscape) and designing, animating, and scripting environments. Sumerian does not require specialized expertise, and you can design scenes directly from your browser. Sumerian is still in preview; if you have an AWS account, you can sign up for the preview.
That’s all for today, folks. I hope you enjoyed today’s topic; it is a different and very interesting one. I know most of you already knew about AR & VR, but after reading this post I’m sure you have learned something new, and that’s what TechSimplify is all about. Thank you and see you all very soon. Take care 🙂