Manuel Román, Brian Ziebart, and Roy H. Campbell
Computer Science Department
University of Illinois at Urbana-Champaign
Abstract
The proliferation of wireless networks, large displays, and handheld devices turns the rooms equipped with these devices into execution environments. These environments should be more than mere execution environments, however: they should be programmable spaces with customizable behavior. We call these environments active spaces. In this paper we present an infrastructure for dynamic application composition that provides the tools to customize the behavior of a space.
1. Introduction
Future ubiquitous computing will surround users with a comfortable and convenient information environment that merges physical and computational infrastructures into an integrated habitat. Context-awareness [1-4] should adapt the habitat to user preferences, tasks, group activities, and the nature of the physical space. We term this dynamic and computationally rich habitat an active space. Within the space, users interact with flexible mobile applications, define the function of the habitat, and customize its behavior according to different properties (e.g., personal preferences and current context). An active space is an integrated programmable environment that contains heterogeneous network-connected devices, services, and applications coordinated by a context-aware distributed software infrastructure, and populated by a number of people performing different activities.

Active spaces host the execution of different applications. For example, an active meeting room has applications to control the lights and the audio, present information in a ticker tape, control a slideshow, and track the number, identity, and position of the people present in the room. According to our experience with a prototype active meeting room (Figure 1), the potential of active spaces lies in the ability to orchestrate a number of individual applications, thereby conferring a specific behavior on the active space. We identify three functional levels we consider essential to abstract a physical space and the resources it contains as a single homogeneous programmable environment: the low level, which provides basic functionality including component management and resource discovery and is comparable to the functionality provided by traditional operating systems; the application level, which provides frameworks and tools to build applications; and the active space behavior level, which includes mechanisms to orchestrate the interaction among applications and therefore provides functionality to program the behavior of the active space.
Existing research projects [5] [6] [7] [8] address the low-level and application-level functional issues but do not provide explicit support for active space behavior definition. In this paper we present an infrastructure to program the behavior of active spaces. The infrastructure simplifies the creation of customizable and dynamically adaptable inter-application interaction rules that define how changes in one application affect other applications. We currently use the infrastructure to define interaction rules among six applications (i.e., audio cues, slide show manager, light controller, audio player, ticker tape, and location) running in our active space prototype. The results are encouraging, and we have experienced a qualitative improvement in the global usability of the active space. Furthermore, it is now possible to perceive the active space as an interactive environment with a well-defined behavior instead of an execution environment consisting of disconnected applications.
The rest of the paper is organized as follows: section 2 describes the three functional levels of an active space, including the low level (section 2.1), the application level (section 2.2), and the behavior level (section 2.3); section 3 presents a detailed example of a ticker tape and a location application that use the bridging mechanism to interact; section 4 describes additional application composition examples; section 5 presents related work; and we conclude the paper and describe our future work in section 6.
2. Active Space Functionality Levels
We have developed a meta-operating system called Gaia OS [9] to manage active spaces. Gaia is a distributed middleware infrastructure we refer to as a meta-operating system [10] that coordinates software entities and heterogeneous networked devices contained in a physical space. Gaia exports services to query and utilize existing resources and to access and use current context, and it provides a framework to develop active space-aware applications. Gaia OS is composed of three building blocks: the Gaia OS Kernel, the Gaia Application Framework, and the Gaia Application Level.
2.1 Active Space Low-Level Functionality
The Gaia OS Kernel provides services for location, context, events, and repositories with information about the active space. The system is built as a distributed object system that extends the notion of an execution environment associated with devices to the space level. The kernel also provides functionality to manage remote components (e.g., create, destroy, load, unload, and transfer). Gaia OS abstracts the active space as a programmable execution environment.
The Gaia OS Kernel implements the active space low-level functionality, which is comparable to the functionality provided by traditional operating systems (e.g., process management, file system, and inter-process communication).
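As a rough illustration of this low-level functionality, the sketch below shows what space-level component management could look like from a script. The API is purely hypothetical (the actual kernel exposes distributed object interfaces); a stub kernel stands in for the real one so the sketch is executable.

-- Hypothetical sketch of kernel-level component management; these
-- names are assumptions, not the real Gaia kernel API.
kernel = {
  components = {},
  createComponent = function(self, kind, host)
    local c = { kind = kind, host = host }
    table.insert(self.components, c)
    return c
  end,
  transferComponent = function(self, c, host) c.host = host end,  -- mobility
  destroyComponent  = function(self, c) c.host = nil end,
}

local player = kernel:createComponent("AudioPlayer", "plasma-1")
kernel:transferComponent(player, "laptop-andrew")  -- move to another device
kernel:destroyComponent(player)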
2.2 Active Space Application-Level Functionality
Gaia applications use a set of component building blocks, organized as the Gaia Application Framework [11], to support applications that execute within an active space. The framework provides mobility, adaptation, context-awareness, and dynamic binding, and this functionality permits commercial off-the-shelf as well as new applications to run in the active space. The application framework models applications as a collection of distributed components and reuses some concepts from the Model-View-Controller pattern [12]. The framework exploits resources present in the application environment, provides functionality to alter the application composition dynamically (i.e., the number, type, and location of the application components, as well as the data format they manipulate), is context-sensitive, implements a specialization mechanism that supports the creation of active space-independent applications, and provides functionality to manage the application lifecycle (i.e., instantiation, adaptation, suspension and resumption, fault-tolerance, termination, and mobility).
The application framework infrastructure is composed of five components (Figure 2): model, presentation, controller, input sensor, and coordinator. The model, presentation, controller, and input sensor are the application base-level building blocks and are strictly related to the application domain functionality.
The model implements the logic of the application and exports an interface to access and manage the application's state. The model maintains a list of registered listeners and is responsible for notifying them about changes in the application's state, thereby keeping them synchronized.
The presentation transforms the application's state into a perceivable representation, such as a graphical or audible representation, a temperature or lighting variation, or in general, any external representation that affects the user environment and can be perceived by any of the human senses. Presentations are listeners that are dynamically attached and detached to and from the model. When the model’s state changes, the model notifies all presentations so they can synchronize their internal state.
The input sensor is the component responsible for changing the state of the application. Input sensors can be user-interactive (e.g., GUI and speech recognition) or non-user-interactive (e.g., context synthesizers), and they use the model's interface to alter the state of the application. When the model receives a notification from an input sensor, it automatically sends a notification to all registered listeners.
The controller is a component that mediates the interaction between the input sensor and the model. It translates requests from the input sensor into method calls customized to the model, therefore maximizing input sensor reusability. The same input sensor can be used with different applications by changing the mappings stored in the controller dynamically (Figure 3).
The coordinator encapsulates information about the application components' composition (i.e., the application meta-level) and provides an interface to register and unregister presentations and input sensors. The coordinator also provides functionality to retrieve run-time information about the application's component composition. The functionality provided by the coordinator offers fine-grained control over the application's internal composition rules. This behavior contrasts with traditional MVC applications, which define the composition rules for the application components statically: what views to connect to the model and what controllers to use with the views.
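To make the interplay among these components concrete, the following minimal sketch shows the notification path in Lua (the language we use for bridge scripts). The names are illustrative assumptions, not the actual Gaia API: the model keeps a list of registered listeners, and the controller maps generic input-sensor requests onto model methods.

-- Minimal sketch of the framework's notification path (names are
-- illustrative assumptions, not the Gaia API).
Model = { listeners = {}, state = "" }

function Model:register(listener) table.insert(self.listeners, listener) end

function Model:setState(s)
  self.state = s
  for _, l in ipairs(self.listeners) do
    l:notify(s)                    -- presentations (and bridges) synchronize here
  end
end

Controller = { mappings = {} }     -- request name -> model method name

function Controller:handle(request, ...)
  local method = self.mappings[request] or request
  return Model[method](Model, ...) -- input sensors never touch the model directly
end

-- A presentation is just a listener:
Presentation = {}
function Presentation:notify(s) print("presentation sees: " .. s) end

Model:register(Presentation)
Controller.mappings["itemSelected"] = "setState"
Controller:handle("itemSelected", "new state")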
2.3 Active Space Behavior-Level Functionality
The application-level functionality provides five components to support the development of active space-aware applications. However, the resulting applications are disconnected execution units. The application framework therefore defines an additional component, the application bridge, that allows interaction rules to be defined among applications. These interaction rules specify how changes in one application affect the execution of other applications and therefore make it possible to program the behavior of the active space.
The active space behavior-level functionality is characterized by three key properties: it does not require any changes in the applications involved in the interaction, it is independent of the functionality implemented by the connected applications, and it allows interaction rules to be defined and modified at run-time.
The application bridge (Figure 4) is built as an input sensor that listens for notifications from the source application and introduces changes in the target application by invoking methods on the model via the controller.
Figure 4. Application Bridge
The bridge implements functionality to execute user-defined rules that affect the state of the target application's model when it receives a notification from the source application. The mechanism to trigger the execution of the user-defined rules is common to all bridges, while the rules defining what actions to take are bridge-dependent and are implemented as scripts that are passed to the bridge at instantiation time. The script receives a reference to the source application's model, a reference to the target application's controller, and the source application notification's hint (the notification sent by the source application's model to inform listeners about changes in its state). Users write a script using these parameters to define the interaction rules. The bridge executes the script each time it receives a notification from the model. Figure 5 illustrates the interface of the script.
Figure 5. Application Bridge Script Interface.
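Structurally, a bridge is just another listener on the source model. The sketch below shows one possible shape, building on the Model sketch from section 2.2 (the names are hypothetical): the bridge registers with the source model and, on every notification, delegates to the user-supplied rule script.

-- Sketch of a bridge: an input sensor that runs a user-defined rule on
-- every notification from the source application's model.
Bridge = {}
Bridge.__index = Bridge

function Bridge.new(sourceModel, targetController, rule)
  local b = setmetatable({ source = sourceModel,
                           target = targetController,
                           rule   = rule }, Bridge)
  sourceModel:register(b)   -- the bridge listens like any presentation
  return b
end

function Bridge:notify(sourceEvent)
  -- delegate to the script: function(targetController, sourceModel, sourceEvent)
  self.rule(self.target, self.source, sourceEvent)
end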
3. Using a Ticker Tape to Display People's Location
In this section, we include an application composition example. We describe two applications in detail (location and ticker tape) and explain how we use the ticker tape to display location information.
3.1 Ticker Tape Application
This application displays scrolling items sequentially across multiple display devices (Figure 6). The ticker tape serves as an input/output interaction mechanism within an active space. Unlike traditional stock-quote ticker tapes, our ticker tape displays multimedia items, including graphics, and allows specific actions to be assigned to the scrolling items. Items displayed in the ticker tape can be selected, and they trigger user-defined actions, including launching additional applications or modifying the state of existing applications.
Figure 6. Ticker Tape Item
One main characteristic of the ticker tape is the synchronous and dynamic utilization of multiple display devices. Applications in an active space are not confined to one display device; therefore, a ticker tape item (e.g., text and pictures) displayed in an active space is rendered on multiple devices. When a ticker tape item reaches the edge of one display, it is immediately displayed on the next display. In addition, components in an active space are often mobile, so the ticker tape must be able to respond to devices entering, exiting, and changing location within the active space by attaching, detaching, and re-ordering ticker tape items.
The Ticker Tape is composed of four components: the Model, the Ticker Tape Display Input Sensor (TTDIS), the Ticker Tape Sequencer Input Sensor (TTSIS), and the Coordinator. The Ticker Tape implements the first three components and reuses the default Coordinator implementation provided by the application framework.
The Ticker Tape Model is responsible for orchestrating the sequential handling of scrolling items across the different displays used by the application. The model associates an index with each scrolling item and stores an ordered list of ids, one for the ticker tape input sensor running on each display, so it can dispatch notifications to the appropriate input sensor when an item needs to be displayed. It also contains functionality for adding, updating, and removing scrolling items. A scrolling item is stored in the model as a set of attributes, including the size, color, font, and content of text, the path location and size of pictures, and other attributes that determine how items are rendered and displayed by the display components.
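The following sketch illustrates this state and dispatch logic; the field and method names are assumptions, not the actual implementation.

-- Illustrative sketch of the Ticker Tape Model: items are attribute
-- sets; displays is the ordered list of display input sensor ids.
TickerTapeModel = {
  listeners = {},
  items     = {},   -- index -> { text=..., font=..., picturePath=..., ... }
  displays  = { "display-1", "display-2", "display-3" },  -- left-to-right order
}

function TickerTapeModel:register(l) table.insert(self.listeners, l) end

function TickerTapeModel:getItem(i) return self.items[i] end

function TickerTapeModel:addItem(attrs)
  table.insert(self.items, attrs)
  -- a new item always starts on the first display in the ordered list
  for _, l in ipairs(self.listeners) do
    l:notify{ index = #self.items, displayId = self.displays[1] }
  end
end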
The Ticker Tape Display Input Sensor (TTDIS) is responsible for displaying scrolling items in a display when the model sends the appropriate notification, and notifying the model when its scrolling item reaches the edge of the display so that the next TTDIS can be notified to display the item. In addition, the TTDIS is responsible for detecting and notifying the model when users select a certain scrolling item so that the model can execute any functionality associated with that item. Upon receiving a notification from the model to display a scrolling item, a TTDIS checks if the notification is intended for it. If so, it requests the set of attributes associated with the item from the model, then renders and displays the scrolling item.
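Continuing the sketch above, a TTDIS reaction to a model notification might look as follows; rendering is stubbed with a print, and the names remain assumptions.

-- Sketch of the TTDIS notification handler.
TTDIS = { id = "display-1", model = TickerTapeModel }

function TTDIS:notify(event)
  if event.displayId ~= self.id then return end  -- notification not for us
  local item = self.model:getItem(event.index)   -- fetch the attribute set
  print("rendering: " .. item.text)              -- stand-in for real rendering
  -- when the item reaches the display edge, the TTDIS would notify the
  -- model (via the controller) so the next display in the list takes over
end

TickerTapeModel:register(TTDIS)
TickerTapeModel:addItem{ text = "andrew has entered 2401" }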
The Ticker Tape Sequencer Input Sensor (TTSIS) is a tool that allows users to change the ordering of the displays used by the ticker tape. It receives the current ordered list of displays from the model and allows a user to input a new ordering. Currently, the displays can only be sequenced manually, although once more advanced proximity location services are deployed in Gaia it will be possible to automate sequencing based on device location data.
3.2 Location Application
The location application tracks people inside our computer science building. The application relies on sensor data provided by the active space low-level functionality (the Gaia Kernel) to detect the position of users. The current implementation of the Gaia location service provides information at room granularity. That is, we can detect whether or not a user is present in a room, but not where in the room the user is located.
The location application implements three components, Location Model, Location Presentation, and Location Input Sensor, and reuses the default coordinator.
The Location Model stores and updates information about users and their locations and provides an interface to query user locations. The model stores the user name, the name of the space where he or she is located, and the date and time the user entered and left the space.
The Location Presentation is a graphical presentation that displays information about user location. Users can select a user name and get updated information about his or her position, or they can select a space and learn about the people located in that space.
The Location Input Sensor registers with the person discovery channel to learn about users entering and leaving the space. When a user enters or leaves, a message is posted to the person discovery channel, and the location input sensor sends an event to the model via the controller. There is one instance of the input sensor for each active space.
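A sketch of the input sensor's event handling follows; the method names are assumptions, and the controller is stubbed so the sketch is executable.

-- Sketch of the location input sensor: the person-discovery channel
-- would invoke onDiscovery when someone enters or leaves the space.
LocationSensor = {
  space = "2401",
  controller = { handle = function(_, ...) print("to model:", ...) end },  -- stub
}

function LocationSensor:onDiscovery(user, present)
  local action = present and "entered" or "left"
  -- forward the event to the model via the controller
  self.controller:handle("updateLocation", user, self.space, action, os.date())
end

LocationSensor:onDiscovery("andrew", true)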
Figure 7 illustrates the composition of the location application running in our building. We define three active spaces: domain, 2401, and 3231. These three active spaces are hierarchically organized as a tree, with the domain at the root and 2401 and 3231 as leaves. The coordinator, model, and controller of the application run in the domain active space, while 2401 and 3231 host the execution of the location presentation and location input sensor. When a person enters 2401 or 3231, the input sensor sends a notification to the model running in the domain via the controller (steps A and B in Figure 7), which notifies the presentations (steps C and D). Tracking people in additional active spaces in the DCL is simple: it requires instantiating an input sensor and attaching it to the model running in the domain active space.
Figure 7. DCL Active Space hierarchy (left) and corresponding location application instance (right).
3.3 Using the Ticker Tape to Display Location Information
In this section we explain how we use the ticker tape to display information about the location of users. Figure 8 illustrates the ticker tape application and the location application connected by a bridge; the script with the interaction rules is depicted in Figure 9. We describe the functionality based on an example consisting of a user (Andrew) entering an active space (2401).
Figure 8. Ticker Tape and Location Bridging
When the user enters the active space, the input sensor of the location application calls a method on the model to report the new user (Andrew) entering active space 2401 (A). The location model updates its data structures to reflect the new location report and notifies all of its listeners with the message "andrew has entered 2401" (B). The location-to-ticker tape bridge parses the username "andrew" from the message and calls a method to create a new scroll item in the ticker tape with text ("andrew has entered 2401") and a picture ("users/andrew/andrew.jpg") (C). The controller receives the message, checks for a mapping, and, since no mapping has been defined, simply forwards the request to the Ticker Tape Model (D).
The ticker tape model stores all the fields for the scroll item and notifies all listeners that a new scroll item is available for display on the first display according to its internal display list. The model sends a notification containing a string with the index number of the new item and the id of the ticker tape display input sensor (E, F). The id assigned to the input sensor in the forefront of the figure matches the one included in the notification, so the input sensor calls a method on the ticker tape model to retrieve the scroll item fields (G). The input sensor uses the Gaia file system to retrieve the "andrew.jpg" image, which is stored in the user's personal profile and mounted automatically when the user enters an active space (the image is stored in a remote active space). Next, the input sensor renders the item using the attributes contained in the item structure and scrolls it across the display.
When the scroll item reaches the left side of the display, the input sensor calls a method on the controller to notify that the next input sensor has to begin displaying the item (H). The controller receives the message and forwards it to the ticker tape model (I). The Ticker Tape Model notifies all listeners with a message containing the display id of the next input sensor in the model's internal display list (J, K). This time, the input sensor in the background has the correct id, so it calls a method on the Ticker Tape Model and follows the same steps as the previous input sensor.
1. function(targetController, sourceModel, sourceEvent)
2. local pos = strfind(sourceEvent, " ")
3. name = ""
4. if (pos ~= nil) then
5. name = strsub(sourceEvent, 1, pos - 1)
6. end
7. targetController:defaultSetItem(sourceEvent, name..".jpg")
8. end
Figure 9. Location to ticker tape application bridge script.
Lines 2-6 in the script depicted in Figure 9 parse the location model’s notification and extract the name of the person entering or leaving a space. Line 7 sends a request to the ticker tape model (via the controller) to create a new scroll item consisting of text (the source event) and a picture (the name of the file matches the name of the user).
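For completeness, the same rule can be handed to the bridge sketch from section 2.3 at instantiation time. The wiring below is hypothetical (locationModel and tickerTapeController stand for the two applications' actual components), and string.find/string.sub are the Lua 5 equivalents of the strfind/strsub calls in Figure 9.

-- Hypothetical instantiation of the location-to-ticker-tape bridge.
local rule = function(targetController, sourceModel, sourceEvent)
  local pos = string.find(sourceEvent, " ")
  local name = ""
  if pos ~= nil then name = string.sub(sourceEvent, 1, pos - 1) end
  targetController:defaultSetItem(sourceEvent, name .. ".jpg")
end

-- Bridge.new(locationModel, tickerTapeController, rule)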
4. Additional Application Composition Examples
We have eight additional applications running in our prototype space: a music player, text-to-speech, an x10 appliance controller, a calendar, a slide show presentation manager, scribble, a picture viewer, and a PDF viewer. All of them are built using the functionality provided by the application level.
We have implemented a bridge that connects the music application to the ticker tape, so the title of the current song is automatically displayed in the ticker tape. We also have a bridge that connects the slide show manager application to the x10 appliance controller, with rules that switch off the main lights of the room and switch on two auxiliary lamps when the user moves to the second slide. When presenting slides, presenters most of the time wait for everyone to arrive and introduce themselves while the first slide (the title slide) is displayed; the presentation really starts when they move to the second slide, which is when we change the lighting conditions. The bridge also detects when the presenter reaches the final slide and automatically restores the original lighting status.
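A hedged sketch of this bridge rule follows; the model and controller method names (currentSlide, slideCount, switchOn, switchOff) are assumptions about the two applications' interfaces.

-- Sketch of the slide-show-to-x10 bridge rule.
local lightingRule = function(targetController, sourceModel, sourceEvent)
  local slide = sourceModel:currentSlide()
  if slide == 2 then                           -- the talk really starts here
    targetController:switchOff("main-lights")
    targetController:switchOn("aux-lamp-1")
    targetController:switchOn("aux-lamp-2")
  elseif slide == sourceModel:slideCount() then
    targetController:switchOn("main-lights")   -- restore the original lighting
    targetController:switchOff("aux-lamp-1")
    targetController:switchOff("aux-lamp-2")
  end
end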
One of the latest additions to our prototype active space is a positive feedback mechanism that notifies users about changes in the status of the active space. We built this mechanism using a channel listener application, a text-to-speech application, and a bridge. The channel listener application registers with system event channels (low-level functionality) to learn about users entering and leaving the space, devices added to and removed from the space, and applications started and terminated in the space. The text-to-speech application receives text and converts it into audio. The bridge receives notifications about changes in the active space status from the channel listener application, selects an appropriate message, and sends it to the text-to-speech application to be read aloud. In our experience, this positive feedback mechanism has greatly improved the usability of the system. The text-to-speech application provides a non-intrusive mechanism that allows users to concentrate on their tasks while they get background notifications about changes in the space.
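The corresponding rule might look like the sketch below; the event fields (kind, subject) and the speak method are assumptions about the channel listener and text-to-speech interfaces.

-- Sketch of the status-feedback bridge rule.
local feedbackRule = function(targetController, sourceModel, sourceEvent)
  local phrases = {
    userEntered = " has entered the space",
    userLeft    = " has left the space",
    deviceAdded = " has been added to the space",
    appStarted  = " has been started",
  }
  local suffix = phrases[sourceEvent.kind]
  if suffix then targetController:speak(sourceEvent.subject .. suffix) end
end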
Finally, we built a bridge that connects the ticker tape to a picture viewer application. The main difference from the previous examples is that in this case the ticker tape is used as an input mechanism. The compound application presents a collection of picture thumbnails in the ticker tape. Users can select any of the pictures, and as a result the picture viewer application displays the picture maximized on a plasma display. When the user selects a picture, the ticker tape display input sensor sends a notification to the model. The application bridge receives the notification and sends the name of the selected picture to the picture viewer application.
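A sketch of this rule, with the ticker tape as the source application, follows; the event fields and the viewer's display method are assumptions.

-- Sketch of the ticker-tape-to-picture-viewer bridge rule.
local viewerRule = function(targetController, sourceModel, sourceEvent)
  if sourceEvent.type == "itemSelected" then
    local item = sourceModel:getItem(sourceEvent.index)
    targetController:display(item.picturePath)  -- maximized on the plasma display
  end
end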
5. Related Work
There are a number of projects [5] [6] [7] [8] [13] that provide a software infrastructure for ubiquitous computing environments. The closest to Gaia are [7] and [8], in that they consider physically bounded spaces such as offices and meeting rooms. Only [7] provides an application framework (based on MVC), although it is customized for collaborative, document-based applications. Our approach provides a generic active space application framework with support for both collaborative and non-collaborative applications. Furthermore, it provides support for inter-application interaction.
The application bridge concept is related to scripting approaches such as LuaOrb, which implements language bindings between Lua and CORBA, COM, and Java. LuaOrb simplifies the coordination of existing components, whereas the application bridge reuses existing applications and defines coordination rules among them.
Cooperstock et al. [14] propose a software infrastructure to manage computer-augmented environments, including videoconference environments. They mention the difficulty of using these spaces due to the large number of devices, and they propose a system that adapts automatically and reacts to certain user actions. The application bridging mechanism described in this paper provides the tools to customize the reaction of the active space. The framework described by Cooperstock et al. is customized for a specific type of environment, while application bridges can be used in different environments.
6. Conclusion and Future Work
In this paper, we discuss the relevance of active space application interaction as a mechanism to customize the behavior of active spaces. We present a mechanism called application bridge to implement interaction rules among applications, and describe our experience with a number of applications that use the mechanism. Application bridges do not require modifications in the applications, are independent of the functionality implemented by the applications, and can be attached and modified dynamically.
Current results show that application interaction provides an effective mechanism to customize the behavior of active spaces. The ability to reuse existing applications unmodified and to define the interaction rules as scripts allows us to obtain new functionality easily by defining different interaction rules. Furthermore, defining new interaction rules is fast and does not require extensive programming knowledge. For example, the bridge connecting the slide show manager to the x10 application was built and deployed in around five minutes, and the script contains around twenty lines of code.
As users of the prototype active space, we have observed a clear change since the installation and utilization of the bridges. Before the bridges, the active space was simply a hosting environment for active space-aware applications. After using the bridges, we perceive the active space as a reactive environment with a well-defined behavior. The results are encouraging because the current bridges are fairly trivial, so there is still room for further experimentation (e.g., AI techniques) and improvement.
As part of our future work we plan to continue experimenting with new bridges to integrate new applications and define new interaction rules. We plan to develop more sophisticated bridges that leverage the low-level functionality provided by Gaia OS (e.g., context, presence, and security). All current bridges alter the base-level functionality of the target application (the application's functional domain); we plan to extend our experiments with bridges that affect the meta-level of the target application (interacting with the coordinator of the target application). For example, a bridge between the location and the music application could move the audio from the room speakers to the user's laptop when it detects that the user is not alone, and move the audio back to the room when everybody else leaves.
Finally, we also plan to build an additional mechanism that allows non-computer science users to define interaction rules. We plan to display the currently running applications and, for each application, a list of the notifications and methods that the application implements. Users can then choose an application pair (source and target) and select the actions to take on the target based on notifications fired by the source.
7. References
[1] J. I. Hong and J. A. Landay, "An Infrastructure Approach to Context-Aware Computing," Human Computer Interaction, vol. 16, 2001.
[2] A. K. Dey, D. Salber, and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications," Human-Computer Interaction (HCI), vol. 16, pp. 97-166, 2001.
[3] M. Korkea-aho, "Context-Aware Applications Survey," Helsinki University of Technology, Helsinki, Internetworking Seminar, April 25, 2000.
[4] B. N. Schilit, N. Adams, and R. Want, "Context-Aware Computing Applications," presented at IEEE Workshop on Mobile Computing Systems and Applications, 1994.
[5] B. Brumitt, B. Meyers, J. Krumm, A. Kern, and S. Shafer, "EasyLiving: Technologies for Intelligent Environments," presented at Handheld and Ubiquitous Computing (HUC), Bristol, England, 2000.
[6] J. P. Sousa and D. Garlan, "Aura: an Architectural Framework for User Mobility in Ubiquitous Computing Environments," presented at IEEE Conference on Software Architecture, Montreal, 2002.
[7] P. Tandler, "Software Infrastructure for Ubiquitous Computing Environments: Supporting Synchronous Collaboration with Heterogeneous Devices," presented at Ubicomp 2001: Ubiquitous Computing, Atlanta, Georgia, 2001.
[8] B. Johanson, A. Fox, and T. Winograd, "Experiences with Ubiquitous Computing Rooms," IEEE Pervasive Computing Magazine, vol. 1, pp. 67-74, 2002.
[9] M. Roman, C. K. Hess, R. Cerqueira, A. Ranganathan, R. H. Campbell, and K. Nahrstedt, "Gaia: A Middleware Infrastructure to Enable Active Spaces," IEEE Pervasive Computing, vol. 1, 2002.
[10] F. Kon, R. H. Campbell, M. D. Mickunas, K. Nahrstedt, and F. J. Ballesteros, "2K: A Distributed Operating System for Dynamic Heterogeneous Environments," presented at 9th IEEE International Symposium on High Performance Distributed Computing, Pittsburgh, 2000.
[11] M. Roman and R. H. Campbell, "A User-Centric, Resource-Aware, Context-Sensitive, Multi-Device Application Framework for Ubiquitous Computing Environments," University of Illinois at Urbana-Champaign, Urbana, CS Technical Report UIUCDCS-R-2002-2284 UILU-ENG-2002-1728, July 2002.
[12] G. E. Krasner and S. T. Pope, "A Description of the Model-View-Controller User Interface Paradigm in the Smalltalk-80 System," ParcPlace Systems, Inc., Mountain View 1988.
[13] R. Grimm, J. Davis, E. Lemar, A. McBeath, S. Swanson, S. Gribble, T. Anderson, B. Bershad, G. Borriello, and D. Wetherall, "Programming for Pervasive Computing Environments," University of Washington, Technical Report: UW-CSE-01-06-01, Washington 2001.
[14] J. R. Cooperstock, S. S. Fels, W. Buxton, and K. C. Smith, "Reactive Environments: Throwing Away Your Keyboard and Mouse," Communications of the ACM, vol. 40, pp. 65-73, 1997.