(Augment physical objects with analytics from MicroStrategy. Explore how a Unity platform-based iPad app combined with MicroStrategy REST APIs recognizes a 3D object, calls MicroStrategy datasets, anchors animated analytics to the object and enables virtual interactivity for drill-downs.)
Why AR + Analytics?
Over the past year we've been keenly interested in use cases that combine analytics and the real world. While the technology has been evolving, the development of impactful use cases has stalled. This is especially true for consumer apps (well, except gaming, of course). For example, "scanner" apps in retail are really object-recognition apps that, much to the chagrin of brick-and-mortar stores, are used for comparison shopping. Lately we have seen growing interest from our clients in business apps:
An insurance adjuster’s app that overlays auto parts (and costs) on a damaged section, to help with faster and more accurate estimation
An "x-ray" app to view what's inside (and the price of) a shipped good
An app to overlay 3D anatomical data (from an MRI, for example) on a real patient body
The embedded analytics for these apps should visualize relevant, contextual, object-anchored data in a way that provides recommendations and encourages interaction (gamify, anyone?).
As enthusiastic proponents of agile, before we set about building the actual app we wanted to prove out the tech stack and, of course, demo it.
We picked Unity to build our solution. Unity is a game engine that has been a very popular platform for building two- and three-dimensional video games and simulations on a range of devices.
It has a powerful IDE that supports 3D graphics, WYSIWYG editing, and C# scripting. For our application, in addition to Unity's graphics and animation capabilities, we needed sophisticated computer-vision capabilities to detect images and 3D objects and interact with spaces in the real world. For this we used Vuforia, a platform for creating AR apps.
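To give a feel for how the Vuforia and Unity pieces fit together, here is a rough sketch of a detection handler. Vuforia's API surface has changed across versions; this assumes the classic `ITrackableEventHandler` interface, and `AnalyticsClient.FetchObjectData` is a hypothetical coroutine of ours, not a Vuforia or MicroStrategy API.

```csharp
using UnityEngine;
using Vuforia;

// Attached to the same GameObject as the Vuforia target behaviour.
// Sketch only: assumes the classic ITrackableEventHandler pattern.
public class ObjectDetectedHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED)
        {
            // Hypothetical coroutine: look up this object's analytics
            // in MicroStrategy once Vuforia recognizes it.
            StartCoroutine(AnalyticsClient.FetchObjectData(trackable.TrackableName));
        }
    }
}
```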
The third piece of our app was connecting to an analytics repository and retrieving object data and recommendations. We chose MicroStrategy because of its new REST API capabilities. In the past we have leveraged MicroStrategy's Task Infrastructure to build custom APIs where we designed the JSON response to our liking, but that's a lot of wheel reinventing!
We chose to deploy our APIs as a dedicated RESTful application rather than depend on the plugin framework for MicroStrategy Web.
The entire product catalog and enriched attributes are stored in the database accessed by MicroStrategy. While we were not working with a lot of data, we wanted to minimize the number of API calls and improve the overall performance of the app, so we designed a cube, pointed a report to it, and defined drill paths. The relevant schema and reporting objects were then exposed through a set of REST APIs.
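In outline, the app makes two REST calls per detected object: authenticate, then pull data from the published cube. The sketch below shows that flow in plain C#; endpoint paths and payload shapes vary by MicroStrategy version, and `serverUrl`, `projectId`, and `cubeId` are placeholders, not values from our deployment.

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Sketch of the two MicroStrategy REST calls: log in, then request a
// cube instance for the detected object's catalog data.
public static class MstrClient
{
    public static async Task<string> GetCubeDataAsync(
        string serverUrl, string projectId, string cubeId,
        string user, string password)
    {
        using (var http = new HttpClient())
        {
            // 1. Authenticate; the session token comes back in a response header.
            var loginBody = new StringContent(
                $"{{\"username\":\"{user}\",\"password\":\"{password}\",\"loginMode\":1}}",
                Encoding.UTF8, "application/json");
            var loginResp = await http.PostAsync($"{serverUrl}/api/auth/login", loginBody);
            loginResp.EnsureSuccessStatusCode();
            string token = string.Join("",
                loginResp.Headers.GetValues("X-MSTR-AuthToken"));

            // 2. Request an instance of the published product-catalog cube.
            var req = new HttpRequestMessage(HttpMethod.Post,
                $"{serverUrl}/api/cubes/{cubeId}/instances");
            req.Headers.Add("X-MSTR-AuthToken", token);
            req.Headers.Add("X-MSTR-ProjectID", projectId);
            var dataResp = await http.SendAsync(req);
            dataResp.EnsureSuccessStatusCode();
            return await dataResp.Content.ReadAsStringAsync();
        }
    }
}
```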
How It Comes Together
When the app starts scanning for objects, the Vuforia engine uses sophisticated computer-vision algorithms to detect the object; the app then makes a REST API call to MicroStrategy for that specific object.
Once the data is retrieved from MicroStrategy, we populate an animated scorecard with it and set up predefined drill paths on the scorecard.
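The binding step is straightforward once the metrics are parsed out of the response. As an illustration, a helper like the one below could turn raw metric values into display rows for the scorecard; the `KpiRow` and `Scorecard` names and the compact-number formatting are our assumptions, not part of the MicroStrategy response.

```csharp
using System.Collections.Generic;
using System.Globalization;

// Illustrative: one display row on the anchored scorecard panel.
public struct KpiRow
{
    public string Label;
    public string Display;
}

public static class Scorecard
{
    // Format raw metric values into display strings, compacting
    // large numbers (e.g. 12500 -> "12.5K") for the small AR panel.
    public static List<KpiRow> Build(Dictionary<string, double> metrics)
    {
        var rows = new List<KpiRow>();
        foreach (var kv in metrics)
        {
            rows.Add(new KpiRow
            {
                Label = kv.Key,
                Display = kv.Value >= 1000
                    ? (kv.Value / 1000.0).ToString("0.#", CultureInfo.InvariantCulture) + "K"
                    : kv.Value.ToString("0.##", CultureInfo.InvariantCulture)
            });
        }
        return rows;
    }
}
```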
The scorecard is anchored to the real-world object, meaning that it maintains its position and aspect ratio relative to the object, so the user perceives the scorecard as "augmenting" the object. "Drill" buttons allow the user to interact with the scorecard for recommendations or further information about the object.
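One simple way to get this anchoring effect in Unity is to follow the tracked object's transform with a fixed offset while keeping the panel facing the camera. This is a minimal sketch of that idea, not our production script; `target` and `offset` would be wired up in the Unity editor.

```csharp
using UnityEngine;

// Sketch: keep the scorecard panel anchored to the tracked object
// while billboarding it toward the camera so the text stays readable.
public class AnchoredScorecard : MonoBehaviour
{
    public Transform target;                          // the Vuforia-tracked object
    public Vector3 offset = new Vector3(0f, 0.15f, 0f); // hover above the object

    void LateUpdate()
    {
        // Follow the object with a fixed offset in the object's frame...
        transform.position = target.position + target.rotation * offset;
        // ...and face the camera, preserving the panel's size and aspect.
        transform.rotation = Quaternion.LookRotation(
            transform.position - Camera.main.transform.position);
    }
}
```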
We are developing a fully configurable plugin that can be used to spin up AR apps in Unity. Configuration parameters will include MicroStrategy server information, cubes, security, and authentication. The idea is to create a platform that allows our consultants to build AR apps using Unity, Vuforia and the MicroStrategy plugin in a low-code environment.
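As a sketch of the configuration surface such a plugin might expose (field names here are illustrative assumptions, not the plugin's actual API):

```csharp
using System;

// Illustrative plugin configuration covering the parameters mentioned
// above: server information, cubes, security, and authentication.
[Serializable]
public class MstrArConfig
{
    public string ServerUrl;              // MicroStrategy Library server
    public string ProjectId;
    public string CubeId;                 // product-catalog cube
    public string Username;
    public string AuthMode = "standard";  // e.g. standard vs. LDAP
}
```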
We also plan to test the object-detection and image-recognition features on a wider variety of object types. So far we have done a fair bit of testing with 3D objects whose detection relies on planar images. The next step is to test objects with more complex geometries.