As part of our ongoing work on integrating the physical and virtual worlds, the concept of Smart Spaces, in which the physical world is instrumented so that it can influence the virtual world, is becoming more central to our research focus.
This document records some of the ideas which are evolving in this space.
When considering a smart space, the range of features it could possess is vast. This section identifies a number of scenarios which describe smart spaces and their relevant features.
You have chosen to have a throat microphone and bone-conduction earpieces implanted for convenience. These small devices are inserted just below the skin (and for the earpieces, next to the bone), leaving only tiny scars. They communicate using either short range radio or skin-conductive networks, and are powered by your own body heat.
These can be prototyped using standard "spook" gear. A pager-sized device would be needed to drive the audio and do the associated processing.
The House network consists of more machines than you can remember, but is driven by the main server in the basement. One of its very minor functions is to act as a general reminder service, in this case like an alarm clock.
It also has a stored voice profile for each of its residents, and uses this to perform accurate voice recognition even when you're half asleep.
This will be harder to prototype, but not too difficult. I imagine that one of the commercial dictation systems will support a COM/DCOM interface, which we could then map into CORBA fairly easily. Using a central service like this, we can train individual profiles.
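The shape of such a central service can be sketched quickly. Everything below (class names, the `recognise` interface, the string result) is an invented placeholder, not the API of any real dictation product; the point is only the per-resident profile lookup.

```python
# Sketch of a central recognition service that selects a stored
# per-resident voice profile before handing audio to a backend
# recogniser. All names here are hypothetical placeholders.

class VoiceProfile:
    def __init__(self, resident, model_params):
        self.resident = resident
        self.model_params = model_params  # backend-specific training data


class RecognitionService:
    def __init__(self):
        self._profiles = {}

    def train(self, resident, model_params):
        self._profiles[resident] = VoiceProfile(resident, model_params)

    def recognise(self, resident, audio):
        # Fall back to a generic profile for unknown speakers.
        profile = self._profiles.get(resident)
        backend = profile.model_params if profile else "generic"
        return f"<text decoded from {len(audio)} samples using {backend}>"


service = RecognitionService()
service.train("alice", "alice-v1")
print(service.recognise("alice", [0.0] * 160))
```

The CORBA wrapper would sit in front of `recognise`, with the profile store living on the House server so every room's microphone sees the same training data.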
The bathroom has been instructed by the House to start the shower using your preferred temperature. The shower cabinet door is locked until the water is within the safe temperature zone.
Again, your throat mike and the House computer cooperate to recognise your speech. They negotiate with your scheduler and your personal preferences manager, selecting today's events and reading them to you in your preferred format and voice.
An audio tickertape! I haven't looked at text-to-speech software for a couple of years, but it was almost adequate then. This kind of thing should give us a rich vein of conference papers on alternative-modality experiences! This work could start immediately.
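The tickertape itself is mostly a formatting problem: turning scheduler events into short speakable phrases for whatever TTS engine we end up with. The event tuple format and the phrasing rules below are assumptions for illustration only.

```python
# A toy "audio tickertape": today's events rendered as short spoken
# phrases, ready to hand to a text-to-speech engine. The event
# representation and phrasing are illustrative assumptions.

def to_phrases(events):
    """events: list of (24h 'HH:MM' string, description) tuples."""
    phrases = []
    for time, desc in sorted(events):  # chronological order
        hour, minute = time.split(":")
        phrases.append(f"At {hour} {minute}, {desc}.")
    return phrases


events = [("14:30", "project review"), ("09:00", "coffee with Jo")]
for line in to_phrases(events):
    print(line)
```

A preferences manager would slot in here to choose verbosity, voice, and which event fields get spoken at all.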
As with most rooms in the house, your bathroom has a wallscreen. Its principal role is to act as a smart mirror, allowing you to see yourself with various effects (hair, clothes, makeup, jewellery, etc.), but it is also a good place to scan the news.
Wallscreens consist of a large colour LCD panel, a touch-sensitive layer, and a built-in microphone and speakers (not used in this case), all controlled by a small processor. The wallscreen processor uses other services on the House network to parameterise the display according to your preferences and needs (for example, correcting for colour deficiencies in your vision).
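One concrete form of that parameterisation: the wallscreen fetches a per-user 3x3 colour matrix from the preferences service and applies it to every RGB pixel. The identity default and the example matrix below are assumptions; a real deficiency-correction matrix would come from the user's profile.

```python
# Sketch of per-user display parameterisation via a 3x3 colour
# matrix applied to RGB pixels. The matrices are illustrative only.

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def correct(pixel, matrix=IDENTITY):
    r, g, b = pixel
    out = []
    for row in matrix:
        v = row[0] * r + row[1] * g + row[2] * b
        out.append(max(0, min(255, int(v))))  # clamp to displayable range
    return tuple(out)


# Example: damp the red channel slightly for a hypothetical profile.
damp_red = [[0.8, 0.2, 0], [0, 1, 0], [0, 0, 1]]
print(correct((255, 0, 0), damp_red))
```

In practice this would run in the wallscreen's display pipeline rather than per pixel in software, but the negotiation with the House service is the same either way.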
This is quite close to the model used in Jini -- devices that we tend to expect as part of a "computer" become independent, and applications negotiate for their use along with other resources. For graphics, it's a model not far removed from X ;-)
Prototype wallscreens would require an LCD monitor, video camera and speakers, with a reasonable processor and network link. They are certainly at the higher end of "actuator" class devices, well beyond PIC class processing.
Revising the infrastructure to support a world where a user's session moves constantly with the user, utilising (and often sharing) resources nearby will take some work. As an example, X currently has a simple user-to-display assumption and current window managers are designed only for individual desktop use.
Our outcomes will include window managers for shared wallscreens, audio managers for shared speakers, input controllers for shared touchscreens, etc. Physically however, there appear to be no problems with implementing these devices using COTS hardware.
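The common core of those managers is resource negotiation: a session requests nearby devices and releases them when the user moves on. A minimal lease-based sketch, with all class and method names invented for illustration:

```python
# Sketch of lease-based negotiation for shared devices (wallscreens,
# speakers, touchscreens). Names are hypothetical; a real system
# would add timeouts, sharing policies, and discovery (cf. Jini).

class ResourceManager:
    def __init__(self, devices):
        self._free = set(devices)
        self._held = {}  # device -> session holding the lease

    def acquire(self, session, device):
        if device not in self._free:
            return False  # already leased; could queue or share instead
        self._free.discard(device)
        self._held[device] = session
        return True

    def release(self, session, device):
        if self._held.get(device) == session:
            del self._held[device]
            self._free.add(device)


mgr = ResourceManager({"bathroom-wallscreen", "bathroom-speakers"})
assert mgr.acquire("alice-session", "bathroom-wallscreen")
assert not mgr.acquire("bob-session", "bathroom-wallscreen")
mgr.release("alice-session", "bathroom-wallscreen")
assert mgr.acquire("bob-session", "bathroom-wallscreen")
```

The interesting research questions start where this sketch stops: what happens when two sessions legitimately want the same wallscreen at once.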
Your implanted I/O devices are dedicated devices, and rely on the House (or some other cell) to perform general-purpose computation for them. Your personal computer is a medium sized device which takes over some functionality from the implants.
For example, it allows them to switch to using a skin-conduction network rather than radio, a great saving in power. You can then use your implants outside of a support environment.
Your personal computer manages your interface with the information environment, hosting your preferences and mediating your interactions. It is also your "bodyguard" -- providing a level of security when outside a trusted environment. It can perform encryption and strong authentication.
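The strong-authentication part of the bodyguard role can be sketched as an HMAC challenge-response, so the shared secret never crosses the air. Key provisioning and the choice of SHA-256 are simplifying assumptions for illustration.

```python
# Sketch of the "bodyguard" authenticating to an untrusted space:
# an HMAC challenge-response, keeping the shared key off the air.
# Key handling is deliberately simplified.

import hashlib
import hmac
import secrets

SHARED_KEY = b"resident-key"  # assumed provisioned with the House

def make_challenge():
    return secrets.token_bytes(16)

def respond(key, challenge):
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    return hmac.compare_digest(respond(key, challenge), response)


challenge = make_challenge()
response = respond(SHARED_KEY, challenge)
print(verify(SHARED_KEY, challenge, response))    # genuine PC
print(verify(b"wrong-key", challenge, response))  # impostor
```

The same handshake would gate the function migration described below: the House only accepts migrated state from a PC that has authenticated.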
The House, your PC and your implants dynamically and seamlessly reconfigure their interactions while applications continue to run. Some functions migrate from the House to your PC, ready for the day's work.
Smart spaces require computers to implement their intelligence. Traditional workstations/PC-style devices are too big to support the discovery of what happens when your coffee mug is "smart".
We're doing some preliminary investigation into the suitability of various platforms for our research. There are several attributes of interest:
Some devices are able to operate using novel power sources -- for example, the IRX2.0 PIC board from MIT has been run from solar cells powered by fluorescent lighting.
In the more speculative realms, the use of body heat and motion (like watches) as means of generating power could be used, or at least suggested if we are unable to marshal the hardware resources to actually implement such devices.
Smaller devices are typically capable of supporting relatively slow-speed serial connections, using either wired or wireless (typically infra-red) links. The protocols implemented, however, tend to be very simple, with poor support for error handling, congestion control, etc.
In order to empower the full range of devices in a typical space, we must have strategies for supporting both conventionally networked devices (wired or wireless TCP/IP) and simpler interfaces.
It seems that a reasonable amount of processing, storage and power can be obtained cheaply in packages approximately 5x5x1cm (2x2x0.5"). Most devices at this scale use infra-red networking with limited capabilities, and are based on an 8-bit RISC CPU with a few K of RAM/FLASH, running on standard 9v batteries. We can realistically expect these devices to shrink by a factor of five within five years, aided by mass-production manufacturing.
For want of a better term, I'll call this size device a pico-computer.
A second level of scaling can be utilised with devices around 10x10x5cm. The PC-104 standard provides a modified PC architecture in a robust physical package. A full range of functionality is available, with a typical system being a 486-class CPU, several MB of RAM, an ethernet card and a 20M FLASH disk. Such systems can also run a real OS, including several Open Source options.
Again, time and manufacturing technology will likely make these devices significantly smaller and more powerful within 5 years. I'll term these nano-computers.
PC and workstation machines are traditionally categorised as micro-computers. Their size will probably remain reasonably constant, being constrained by DVD ROM devices, expansion card standards, etc. Their power will increase by orders of magnitude.
We have several integration points between the smart spaces hardware and the rest of our information environment. Foremost amongst these is networking, but we also have a number of Palm Pilot devices, a number of Apple Newton devices, and we're likely to acquire some WinCE devices.
Integration with the Palm Pilot is basically a matter of networking. While the standard cradle provides a wired option, perhaps the best approach is to build a small add-on board that provides IR connectivity.
Our current Newton devices use the Photonics Cooperative network. This is an infrared Appletalk-based system, where a number of base stations are wired (normally daisy-chained) to the backbone network via an ethernet bridge (GatorBox CS). The Newtons use a transceiver connected to their localtalk port, and can support both Appletalk and tunneled IP applications.
We have approximately 200 of these transceivers, but they are no longer manufactured. Support from the vendor is very limited.
A number of possible platforms are being investigated. This section will continue to grow as further work is done.
The device must implement the link-layer protocol, LLAP. This requires some functionality which Apple implemented using the Zilog 8530 SCC chip. Zilog publishes an application note which describes such an implementation in some detail.
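The awkward part of doing this without the 8530 is the frame check sequence, which the SCC normally computes in hardware. SDLC-style framing uses a CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF); the bit-by-bit version below is slow but small enough to port to an 8-bit CPU. The exact bit ordering and any final inversion applied on the wire should be treated as assumptions to be checked against the Zilog application note.

```python
# Bit-by-bit CRC-16-CCITT (poly 0x1021, init 0xFFFF, MSB-first,
# no final inversion). Wire-level details (bit order, final XOR)
# must be confirmed against the SDLC/Zilog documentation.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


print(hex(crc16_ccitt(b"123456789")))
```

On the Zilog micro-controller this inner loop would be replaced by a 256-entry table lookup if the RAM budget allows.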
To utilise the transceivers on hand, one possible strategy would be to fabricate a small board (say 5x5cm) with a Zilog micro-controller, up to 1M RAM, a prototyping area with digital and analog I/O, and a LocalTalk port for connection to the IR transceiver. Constrained by the size of the transceiver, such a device would be approximately 7x8x4cm (or 2.5x3.5x1.5 inches) in total, including batteries.
POSIX APIs are available.