Augmented Reality has come a long way, from a science-fiction concept to a practical, widely available technology. The cost of developing Augmented Reality experiences has dropped dramatically in recent years, and today AR is available even on a mobile phone.
Both Apple and Google have been investing heavily in Augmented Reality, as is evident from the development of ARKit (by Apple) and ARCore (by Google). These ground-breaking technologies have put the power of AR into the hands of individual developers, which was not conceivable a few years ago.
We will be focusing on ARCore, Google's platform for creating Augmented Reality experiences without needing deep knowledge of OpenGL or rendering.
What is ARCore?
ARCore is a software development kit developed by Google that lets you build augmented reality applications.
ARCore uses three key technologies to integrate virtual content with the real environment:
- Motion tracking: allows the phone to understand its position relative to the world.
- Environmental understanding: allows the phone to detect the size and location of all types of surfaces: horizontal, vertical, and angled.
- Light estimation: allows the phone to estimate the environment's current lighting conditions.

Alongside these, Sceneform makes it straightforward to render realistic 3D scenes in AR and non-AR apps without having to learn OpenGL. It includes a high-level scene graph API and a realistic, physically based renderer provided by Filament.
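To make light estimation concrete, here is a minimal conceptual sketch (not the ARCore API; all names and values are illustrative assumptions) of how an estimated ambient intensity can be applied to a virtual object's colour so it blends with the real scene:

```python
# Conceptual sketch only: ARCore's light estimation supplies a scene
# intensity each frame; the renderer scales virtual materials by it so
# a virtual object in a dim room does not glow unnaturally bright.

def apply_light_estimate(albedo, pixel_intensity):
    """Scale an RGB albedo (0-1 floats) by the estimated scene intensity."""
    return tuple(min(1.0, c * pixel_intensity) for c in albedo)

# A white virtual cube in a dim room (estimated intensity 0.4)
shaded = apply_light_estimate((1.0, 1.0, 1.0), 0.4)
print(shaded)  # (0.4, 0.4, 0.4)
```

In a real app you would never compute this yourself; the point is only that the estimate is a per-frame scalar (plus a colour correction) that the renderer folds into shading.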
Who can use ARCore?
Right now, the ARCore SDK is available for:
- Android NDK
- Unity for Android
- Unity for iOS
Augmented Reality vocabulary you need to know:
There are five main concepts to understand before diving into the details.
- FEATURE POINTS
When you move around with your device, ARCore uses the camera to detect “visually distinct features” in each captured image. ARCore uses these points in combination with the device sensors to figure out your location in space and estimate your pose.
- POSE
ARCore uses pose to refer to the position and orientation of the camera. It needs to align the pose of the virtual camera with the pose of your device’s camera so that virtual objects are rendered from the correct perspective. This allows you to place a virtual object on top of a plane, circle around it and watch it from behind.
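A pose is just a rotation plus a translation that maps points from an object's local space into world space (ARCore exposes this as `Pose.transformPoint`). The pure-Python sketch below is only an illustration of that idea, restricted to a yaw rotation for brevity; the names and numbers are assumptions:

```python
import math

# Conceptual sketch of a pose: a rotation (here, only a yaw about the
# vertical Y axis) followed by a translation into world coordinates.

def transform_point(yaw_deg, translation, local_point):
    """Rotate local_point by yaw_deg about Y, then translate into world space."""
    a = math.radians(yaw_deg)
    x, y, z = local_point
    wx = math.cos(a) * x + math.sin(a) * z
    wz = -math.sin(a) * x + math.cos(a) * z
    tx, ty, tz = translation
    return (wx + tx, y + ty, wz + tz)

# A point one metre in front of a camera that sits at (2, 0, 5),
# rotated 90 degrees: "in front" in local space lands beside it in world space.
print(transform_point(90.0, (2.0, 0.0, 5.0), (0.0, 0.0, -1.0)))
```

This is exactly the alignment the article describes: render the virtual scene through a camera whose pose matches the physical one, and virtual objects appear from the correct perspective.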
- PLANES
When processing the camera input stream, apart from feature points ARCore also looks for horizontal and vertical surfaces, like tables, desks, walls or the floor. These detected surfaces are called planes. We’ll see how you can use these planes to anchor virtual objects to the scene.
- ANCHORS
An anchor describes a fixed location and orientation in the real world. By attaching the virtual object to an anchor, we ensure ARCore tracks the object’s position and orientation correctly over time.
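The value of an anchor is indirection: the virtual object stores only an offset relative to the anchor, so when ARCore refines its understanding of the world and corrects the anchor's position, the object follows automatically. A minimal conceptual sketch (not the ARCore API; names are illustrative):

```python
# Conceptual sketch of an anchor: the object is placed relative to the
# anchor, so correcting the anchor corrects the object for free.

class Anchor:
    def __init__(self, world_position):
        self.world_position = list(world_position)

def object_world_position(anchor, local_offset):
    return tuple(a + o for a, o in zip(anchor.world_position, local_offset))

table = Anchor((1.0, 0.0, -2.0))            # anchor on a detected plane
offset = (0.0, 0.1, 0.0)                    # virtual cup 10 cm above it
print(object_world_position(table, offset))  # (1.0, 0.1, -2.0)

# As tracking improves, ARCore nudges the anchor a few centimetres;
# the cup moves with it without any change on our side.
table.world_position[0] += 0.03
print(object_world_position(table, offset))  # (1.03, 0.1, -2.0)
```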
- HIT TEST
When the user taps on the device’s screen, ARCore runs a hit test from that (x, y) coordinate. Imagine a ray starting at the camera, passing through the point your finger touched, and extending into the scene. ARCore returns any planes or feature points intersected by this ray, along with the pose of each intersection. The result of a hit test is a list of hits, each pairing an intersected plane or feature point with a pose, which we can use to create anchors and attach virtual objects to the world.
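Under the hood this is ray casting. In ARCore you would simply call `Frame.hitTest(x, y)`; the pure-Python sketch below only illustrates the geometry for the simplest case, a single horizontal plane at a given height (all names and values are assumptions):

```python
# Conceptual sketch of a hit test: intersect the ray from the camera
# through the tapped point with one horizontal plane at y = plane_y.

def hit_test(ray_origin, ray_direction, plane_y):
    """Return the point where the ray crosses the plane, or None on a miss."""
    oy = ray_origin[1]
    dy = ray_direction[1]
    if dy == 0:
        return None                     # ray parallel to the plane
    t = (plane_y - oy) / dy
    if t < 0:
        return None                     # plane is behind the camera
    return tuple(o + t * d for o, d in zip(ray_origin, ray_direction))

# Camera at eye height looking down and forward; floor plane at y = 0
print(hit_test((0.0, 1.5, 0.0), (0.0, -1.0, -1.0), 0.0))  # (0.0, 0.0, -1.5)
```

The real API repeats this test against every tracked plane and feature point and returns the hits sorted by distance, so taking the first result usually gives you the surface the user meant to tap.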