
DEVA: A New Augmented Reality App To Visualise Air Quality

We previously discussed how the large and growing field of Augmented Reality (AR) has relatively few applications for visualising air pollution. Our contribution to this niche is DEVA, the Dynamic Exposure Visualisation App. DEVA is a mobile app that ingests data from multiple sources, including citizen science sensors, and presents it in an intuitive way that gives users meaningful insight into air quality. With DEVA, citizens get a clearer picture of air quality in their immediate surroundings and can make more informed decisions when choosing a route to school, work or home. In this article we set out DEVA’s AR environment, focusing on 3D visualisation aspects and the Near-n-Far data concept.


Augmented Reality Dynamic Exposure Visualisation App

As presented in our state-of-the-art review, there are only a handful of AR applications that visualise air pollution. The basic visual concept in those examples is to augment the scene with graphical elements, such as rocks or balls in varying numbers and sizes, depending on the level of pollution. There is a clear need to go well beyond this and develop new means of visualisation.


3D visualisation of sensor data

The current state of the art in AR visualisation of air pollution relies on simple approaches, e.g. visualising pollutants as flying balls or rocks. For DEVA, we aim to exploit the wider range of visualisation capabilities provided by current render engines such as Unity. Complex graphical elements, such as clouds or specific shaders, can be used for this purpose.


In DEVA we experiment with two kinds of data representation: a simple representation that uses common graphics primitives and attributes, and a more complex one that leverages the mobile device’s Graphics Processing Unit (GPU) to produce advanced effects and particles.


Simple representation

The simple representation uses basic 3D models (meshes) to mark the location of a pollutant measurement on the 3D see-through screen (see Figure 1). These models can be easily extended as they use static mesh data such as:

  • Sphere

  • Cube

  • Tetrahedron

  • Cloud geometry

  • Icons

  • Pictograms representing a cloud, a pin or a device



Figure 1: Examples of different visualisations of sensor data: (from left to right) clouds or point clouds, text or values, 3D pins


The measurements can be visually distinguished by the following properties (a code sketch follows the list):

  • For each kind of pollutant a specific colour is defined

  • For the measured amount of pollution, the size of the graphical element changes accordingly

  • To reinforce the perception of the 3D location of elements, the renderer can activate transparency so that far-away objects fade into the distance.
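
To make the simple representation concrete, the following Unity C# sketch shows one possible wiring of these three encodings. It is a minimal illustration under our own assumptions: the class name, colour table and scaling constants are invented, not taken from the DEVA code base.

```csharp
using UnityEngine;

// Minimal sketch of the simple representation: one primitive per measurement,
// with colour encoding the pollutant type, scale encoding the measured value
// and transparency encoding the distance to the user.
public class SimpleMarkerSpawner : MonoBehaviour
{
    // Hypothetical pollutant-to-colour table (illustrative only).
    static Color ColourFor(string pollutant) => pollutant switch
    {
        "NO2"  => Color.red,
        "PM10" => Color.yellow,
        _      => Color.green
    };

    public GameObject Spawn(string pollutant, float value, Vector3 worldPos, Vector3 userPos)
    {
        var marker = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        marker.transform.position = worldPos;

        // Size grows with the measured amount (clamped to keep the scene readable).
        float scale = Mathf.Clamp(value * 0.05f, 0.2f, 3f);
        marker.transform.localScale = Vector3.one * scale;

        // Far-away markers fade out via the alpha channel
        // (assumes the marker uses a transparent-capable material).
        float distance = Vector3.Distance(worldPos, userPos);
        var colour = ColourFor(pollutant);
        colour.a = Mathf.Clamp01(1f - distance / 100f);
        marker.GetComponent<Renderer>().material.color = colour;
        return marker;
    }
}
```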


Extended representation

The more complex extended representation uses the full range of the device’s GPU capabilities via shader-based rendering. Unity offers three techniques: graphics shaders, visual effect (graph) shaders and compute shaders. Such effects are used more in games than in traditional software and allow for more spectacular real-time 3D constructs. In combination with the engine’s particle system, which is also integrated into the shader system, the user gets fine control over the visualisation. All this makes the app more attractive and fun, especially for kids. To implement the extended representation, DEVA converts the location and the value of measurements into dedicated array lists, to be rendered as:

  • Particle clouds (with or without animation)

  • Complex shapes (single pollutant values, aggregations, groups etc.)

  • Artistic formations


The properties depend on the shape and effect complexity (a code sketch follows this list):

  • Colour and gradient depending on the value range of pollutants. Here, some animation for min/max detection can enhance the understanding of the ingested data

  • Textures of the shapes give a more realistic representation of the pollutant (static or animated)

  • Parameters influencing the base shape (mathematical parameters and formulas)

  • Selection of static or animated effects (e.g. clouds, fog, fire)
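
As an illustration of the array-list-to-particles pipeline, the sketch below feeds measurement positions and values into Unity’s built-in particle system. This is one plausible wiring, not the actual DEVA shader implementation; all names and constants are assumptions.

```csharp
using UnityEngine;

// Sketch: each measurement becomes one particle whose colour and size
// encode its value, producing a static particle cloud.
[RequireComponent(typeof(ParticleSystem))]
public class MeasurementParticleCloud : MonoBehaviour
{
    public void Render(Vector3[] positions, float[] values, Gradient gradient, float maxValue)
    {
        var particles = new ParticleSystem.Particle[positions.Length];
        for (int i = 0; i < positions.Length; i++)
        {
            float t = Mathf.Clamp01(values[i] / maxValue);   // normalise value to 0..1
            particles[i].position = positions[i];
            particles[i].startColor = gradient.Evaluate(t);  // colour gradient over the value range
            particles[i].startSize = Mathf.Lerp(0.2f, 1.5f, t);
            particles[i].remainingLifetime = float.MaxValue; // static cloud; animation optional
        }
        GetComponent<ParticleSystem>().SetParticles(particles, particles.Length);
    }
}
```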


Not all conceivable visual representations are meaningful, and not all can be implemented in the first version of the COMPAIR framework and therefore of DEVA. Initially, the app will likely offer only a couple of visual solutions and parameters for each representation. However, thanks to the flexible pipeline, the system can easily be extended in the future, on demand by the project partners and in response to the usability and user experience tests conducted with the pilot partners.


Equaliser

To simplify the setup of these parameters, we propose a parameter-equaliser concept. Similar to a hi-fi equaliser, the user can adjust the “importance” of the different sensor types: a higher value means more data, represented by larger geometrical objects; a lower value means less data (or aggregated data) and smaller objects.


In Figure 2, the “importance” value for temperature is higher than those for PM1 and PM2, so the system focuses more on 3D representations of temperature observations. A value change is directly visible in the see-through window, bringing the important objects (observations of the corresponding sensor type) into the user’s focus (bigger size, text values etc.) and making less important objects smaller or transparent.
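
A minimal sketch of the equaliser mapping, assuming sliders that deliver an importance value between 0 and 1 per sensor type (the ranges and names below are illustrative, not the shipped implementation):

```csharp
using UnityEngine;

// Sketch: map a sensor type's "importance" slider to marker scale and alpha.
public static class Equaliser
{
    // Importance in 0..1 from the slider; returns a scale multiplier,
    // so important sensor types get larger geometrical objects.
    public static float ScaleFor(float importance) => Mathf.Lerp(0.3f, 2f, importance);

    // Less important types also become more transparent.
    public static float AlphaFor(float importance) => Mathf.Lerp(0.25f, 1f, importance);
}
```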



Figure 2: Example of a setup for the equaliser


Besides pollution measurements, additional information that may interest the user includes details about the sensors themselves, e.g. the manufacturer (see Figure 3, left). Other means of visualisation can also be included, such as small histograms, as depicted in Figure 3 (right). This information can be placed at the top of the see-through area or positioned three-dimensionally where the data are located. Bringing windows closer to the user, or to the top, lets the user interact with objects and UI elements directly via the touchscreen.



Figure 3: Examples of data visualisations for specific sensor information (left) and a histogram visualisation, currently just a schematic representation (right)


Visualisation concepts of near and far data in augmented reality

The sensor data provided by the citizen science community are of a specific nature, which requires some analysis. The aim of COMPAIR is to visualise air pollution and traffic data provided by DIY sensors distributed across local communities. Although the project will offer a significant number of sensors to our pilot partners, the density of sensors per several dozen square metres will be limited. On the other hand, the AR device offers a view of the near surroundings depending on the location of the user. This may be a wide field of view or one obstructed by buildings, vegetation and other objects in the environment.


Near-n-far

As a result, only a few sensor measurements will be available in the user’s immediate surroundings. Any AR visualisation must therefore be able to display information that is further away and possibly outside the visible space. Hence, we developed a Near-n-Far concept based on an AR space that supports the visualisation of data at near, middle and far distances.


In augmented reality, identifying the environment to detect overlap and occlusion between real and synthetic objects is an important topic, but it puts a lot of technology and new issues into play (loading 3D city maps or activating plane/object recognition in the AR process). The three solutions outlined below attempt to solve these problems in a convenient manner. DEVA will allow the user to quickly switch between them and change their parameters. During the upcoming user evaluations, we will identify the most appropriate concept.


Dome

The first concept is a dome-like presentation, as depicted in Figure 4. It is based on an invisible half-sphere around the user. Data are visualised depending on their proximity to the user: closer data are arranged on the lower part of the half-sphere, while distant data are located higher up. Depending on the environment and the needs of the user, the radius of the sphere can be adapted; in a dense city location, a smaller sphere may be more suitable.


The height at which a visual is projected onto the surface is calculated from the distance between the measurement and the user (using the GPS coordinates of the user and the measurement location). The system can thus adjust the height according to the distance and avoid overlapping objects; the bottom of the sphere stays empty. Other attributes like size, colour, transparency or visual effects can be added as desired.
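
The following sketch shows one way to compute such a dome placement in Unity C#. It reflects our reading of the concept; the elevation range and parameter names are assumptions.

```csharp
using UnityEngine;

// Sketch of the Dome placement: a measurement keeps its compass direction,
// but its elevation on an invisible half-sphere around the user grows
// with its real-world distance.
public static class DomeProjection
{
    public static Vector3 Project(Vector3 userPos, Vector3 measurementPos,
                                  float sphereRadius, float maxDistance)
    {
        Vector3 offset = measurementPos - userPos;
        float distance = offset.magnitude;                     // e.g. derived from GPS coordinates
        Vector3 dir = new Vector3(offset.x, 0f, offset.z).normalized;

        // Near data sits low on the dome, far data higher up;
        // the minimum elevation keeps the bottom of the sphere empty.
        float elevation = Mathf.Lerp(10f, 80f, Mathf.Clamp01(distance / maxDistance))
                          * Mathf.Deg2Rad;
        Vector3 onDome = dir * Mathf.Cos(elevation) + Vector3.up * Mathf.Sin(elevation);
        return userPos + onDome * sphereRadius;
    }
}
```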



Figure 4: Dome representation


Theatre

Another possibility is an amphitheatre-like presentation, where data are arranged at different heights and with different sizes depending on their distance to the user.


Similar to the Dome projection, the Theatre mode automatically arranges data while avoiding overlap. Near data remain in their original position, while distant data are shifted along a curve (see Figure 5): the further away a visual is from the user, the higher it is moved. In contrast to the Dome representation, the data are not projected onto a surface; only their height changes. Other attributes like size, colour, transparency or visual effects can be added as desired.
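
A minimal sketch of the Theatre arrangement, assuming a logarithmic lift curve beyond a near-field radius (the curve shape and names are our own choices, not a published formula):

```csharp
using UnityEngine;

// Sketch of the Theatre arrangement: data keeps its horizontal position;
// only the height is raised along a curve once the measurement lies
// beyond a near-field radius.
public static class TheatreArrangement
{
    public static Vector3 Arrange(Vector3 userPos, Vector3 measurementPos,
                                  float nearRadius, float curveSteepness)
    {
        // Horizontal distance between user and measurement.
        float distance = Vector3.Distance(
            new Vector3(userPos.x, 0f, userPos.z),
            new Vector3(measurementPos.x, 0f, measurementPos.z));

        // Near data remains in its original position.
        if (distance <= nearRadius) return measurementPos;

        // Beyond the near radius the height grows with distance,
        // flattening off like the tiers of an amphitheatre.
        float lift = curveSteepness * Mathf.Log(1f + (distance - nearRadius));
        return measurementPos + Vector3.up * lift;
    }
}
```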



Figure 5: Amphitheatre representation


Ring

The third option is the so-called Ring representation (see Figure 6). The user is surrounded by a virtual ring and an optional compass with north-south directions assigned. This data ring carries information about sensor data further away; the user can rotate the data ring (compass) to get information about far data in any direction.



Figure 6: Ring representation with the (optional) compass


Here, pollutant and traffic data are projected onto the geometry of a ring surrounding the user like a swim ring; other shapes are possible. In contrast to the two previous methods, the original 3D visuals remain in the AR scene and are highlighted in parallel when the user makes a selection on the ring. Consequently, the user can always view both the real data and the projected data. If too much data needs to be presented on the ring, a filter can be applied, e.g. a distance or view-frustum filter. Other attributes like size, colour, transparency or visual effects can be added as desired.
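
One plausible way to project a far measurement onto the ring by its compass bearing is sketched below (names and conventions are assumptions; north is taken as the +z axis):

```csharp
using UnityEngine;

// Sketch of the Ring projection: far-away data is mapped onto a ring
// around the user according to its compass bearing, so a measurement
// to the north appears at the north point of the ring.
public static class RingProjection
{
    public static Vector3 Project(Vector3 userPos, Vector3 measurementPos, float ringRadius)
    {
        Vector3 offset = measurementPos - userPos;
        // Bearing in the horizontal plane (0 = north / +z, clockwise).
        float bearing = Mathf.Atan2(offset.x, offset.z);
        return userPos + new Vector3(Mathf.Sin(bearing), 0f, Mathf.Cos(bearing)) * ringRadius;
    }
}
```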


Interaction concept

DEVA is programmed to run on mobile devices, whose primary form of input is touch. To design interactions, Unity provides a flexible system, called the Input System, for defining and mapping user actions (touch, gamepad, keyboard, mouse, etc.) to method calls. The touch interface supports all combinations of touch interactions with one to ten fingers.


Several different interaction forms are considered for DEVA:

  • Touch on a specific point and select (one finger). This is usually used to operate the GUI or to select individual 2D/3D objects via ray-casting (see the sketch after this list)

  • Pinch by bringing the thumb and index finger closer together (and then moving them apart if necessary). This is usually used for zoom-in/zoom-out functionality or to define regions for multi-selection or grouping

  • Slide or swipe over the screen (one finger). This is used to move a window or objects across the screen
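
The ray-casting selection from the first item could be wired up with Unity’s Input System roughly as follows. This is a minimal sketch of one plausible wiring, not DEVA’s actual code:

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch: a one-finger tap casts a ray from the camera through the touch
// point and selects the first 3D object (with a collider) that it hits.
public class TapSelector : MonoBehaviour
{
    void Update()
    {
        var touch = Touchscreen.current?.primaryTouch;
        if (touch == null || !touch.press.wasPressedThisFrame) return;

        Ray ray = Camera.main.ScreenPointToRay(touch.position.ReadValue());
        if (Physics.Raycast(ray, out RaycastHit hit))
            Debug.Log($"Selected: {hit.collider.name}"); // hand off to the GUI/selection logic
    }
}
```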


3D-Beam

A more sophisticated possibility is the 3D beam interaction (see Figure 7). Depending on the orientation of the mobile device, a so-called beam or laser pointer allows the user to select data in the augmented 3D space. A final tap provides detailed information about the selected object or triggers further interaction. The beam is initiated as a normal one-finger press held for a short delay (0.5 to 1 second) and stays active until the user lifts the finger from the screen. The object being “virtually touched” by the beam is highlighted to indicate the focus to the user.
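
A sketch of how the beam could be implemented with Unity’s Input System is shown below. The hold delay comes from the article; the code structure and highlight style are assumptions.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Sketch of the 3D beam: a press held for ~0.5 s activates a beam along the
// device's viewing direction; whatever it hits is highlighted until release.
public class BeamPointer : MonoBehaviour
{
    const float HoldDelay = 0.5f; // 0.5-1 s, as proposed in the article
    float pressTime;
    Renderer highlighted;

    void Update()
    {
        var touch = Touchscreen.current?.primaryTouch;
        if (touch == null) return;

        if (touch.press.wasPressedThisFrame) pressTime = Time.time;

        bool beamActive = touch.press.isPressed && Time.time - pressTime >= HoldDelay;
        if (beamActive)
        {
            // The beam follows the device orientation (camera forward).
            if (Physics.Raycast(Camera.main.transform.position,
                                Camera.main.transform.forward, out RaycastHit hit))
                Highlight(hit.collider.GetComponent<Renderer>());
        }
        else if (touch.press.wasReleasedThisFrame)
        {
            Highlight(null); // beam deactivates when the finger is lifted
        }
    }

    void Highlight(Renderer target)
    {
        if (highlighted == target) return;
        if (highlighted != null) highlighted.material.color = Color.white; // crude un-highlight
        if (target != null) target.material.color = Color.cyan;            // "virtually touched"
        highlighted = target;
    }
}
```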



Figure 7: 3D beam interaction


To avoid the finger on the screen overlapping the rendering in the see-through area, it would be desirable to define a dedicated location (a round button, a beam “hot spot”) where the user places the finger to interact. Design-wise, this would be similar to the on-screen control elements of mobile games, where a gamepad is simulated on the screen.


For an inexperienced user, all this may sound quite abstract. But we promise that things will be much clearer when you run the app on your phone. In the coming months, DEVA will be available on Google Play and the App Store. Subscribe to our newsletter to be the first to hear about it.

