  • Research Article
  • Open access

“Twhirleds”: Spun and whirled affordances controlling multimodal mobile-ambient environments with reality distortion and synchronized lighting to preserve intuitive alignment

Abstract

The popularity of the contemporary smartphone makes it an attractive platform for new applications. We are exploring the potential of such personal devices to control networked displays. In particular, we have developed a system that can sense mobile phone orientation to support two kinds of juggling-like play styles: padiddle and poi. Padiddling is spinning a flattish object (such as a tablet or board-mounted smartphone) on the tip of one’s finger. Poi involves whirling a weight (in this case the smartphone itself) at the end of a tether. Orientation of a twirled device can be metered, and with a communications infrastructure, this streamed azimuthal data can be used to modulate various distributed, synchronous, multimodal displays, including panoramic and photospherical imagery, diffusion of pantophonic and periphonic auditory soundscapes, and mixed virtuality scenes featuring avatars and props animated by real-world twirling. The unique nature of the twirling styles allows interestingly fluid perspective shifts, including orbiting “inspection gesture” virtual cameras with self-conscious ambidextrous avatars and “reality distortion” fields with perturbed affordance projection.

Background

Mobile position sensing

We are exploring the expressive potential of personal devices for social, multimodal displays: “multimedia juggling.” We originally experimented with using laptop sudden motion sensor control (Cohen 2008),1 but eventually settled on tracking with mobile devices. Modern smartphones and tablets feature various position sensors, including triaxial gyroscope (to measure orientation) and accelerometer (to measure changing velocity), GPS sensor (to determine location), barometer (with which to infer altitude), and magnetometer (to gauge horizontal bearing). We describe here a system that basically senses orientation to modulate ambient media. “Twhirleds” (for ‘Twirled Whirled Worlds’) is a rotation-sensing mobile application using juggling-style affordances, supported by wireless network communication and a distributed suite of multimodal and mixed reality clients.

Azimuthal tracking especially allows control of horizontally expressive displays, including panoramic imagery, spatial sound, rotary motion platforms, positions of objects in mixed virtuality environments, as well as rhythmic renderings such as musical sequencing. Readers are encouraged to watch videos describing this project:

  • “ ‘Twhirleds’ for iOS and Android”2

  • “Padiddle and Poi Rigs: spinning and whirling control of photospherical browsing”3

Twirling “flow arts”: padiddling (spinning) and poi (whirling)

Flow arts are whole-body activities that integrate aspects of dance, juggling, and performance art. We modernize such activities by integrating them into computerized infrastructures, exposing them to internet-amplified multimedia. The ubiquity of the smartphone makes it an attractive platform, even for location-based attractions. Using only internal sensors, orientation of a smartphone can be used to modulate ambient displays. Such adjustment can be egocentric or exocentric— controlling, that is, either the position of a subject, in which case an entire projected space shifts, or the position of scene objects, in which case only particular entities are moved. Embedding mobile devices into twirling affordances allows “padiddle”-style interfaces (spinning flattish items) and “poi”-style interfaces (whirling tethered items) for novel interaction techniques, as elaborated below.

Padiddle spinning

Padiddling is the spinning of flattish objects (such as flying discs, books, pizza dough, plates, signs, etc.) on one’s fingertip. Once a distraction of college students, padiddling is a disappearing art: the eclipse of vinyl record albums by CDs and then internet streaming deprecated an appropriate affordance. (Padiddling is similar to, but distinct from, “nail delays,” as performed by flying disc freestyle players4 (Wham-O 2010).) Nowadays, padiddling skills are cultivated by specialist performers, often selectively recruited at a young age and trained intensely for juggling and acrobatic shows, for particular example in China (He 2009), as seen in Fig. 1, and in spectacles such as Cirque du Soleil (Béïque and Dragone 2001).

Fig. 1

Don’t bother trying this at home: Chinese acrobats spinning umbrellas

We are experimenting with embedding mobile devices into suitable affordances that encourage padiddling. While it is possible to “free spin” a computer tablet, such a skill is not easily learned (or extended to switch hands or spinning direction), and is impossible to perform with any normal-sized mobile phone.

Padiddling is much easier if the tablet or smartphone is attached to a larger object, such as something about the size and weight of a luncheon tray. Embedding sensing devices into a spinnable affordance allows a “spinning plate”-style interface, as seen in Fig. 2 (Cohen et al. 2011). Even with such deployment, however, padiddling skills are somewhat difficult to acquire. To broaden the enjoyment of twirling interfaces, we also introduce a more accessible style of interaction, namely tethered whirling (Cohen et al. 2013), described following.

Fig. 2

Double-headed (“two-faced”) padiddle configuration, allowing viewing both above and below eye level. (a)bove eye level: inverted. (b)elow eye level: upright

Poi whirling

Poi, originally a Māori performance art featuring tethered weights, combines elements of dance and juggling (Infinite Skill of Poi 2010), and is a kind of “attached juggling.” It has been embraced by festival culture (especially rave-style electronic music events), including extension to “glow-stringing” and “fire twirling,” in which a glowstick (chemiluminescent plastic tube) or burning wick is whirled at the end of a cord. For Twhirleds poi affordance preparation, as seen in Fig. 3, a tether is threaded through a smartphone lanyard or bumper and attached to a kite string winder with a spindle, around which the phone can freely revolve. A user adopts a “Statue of Liberty” pose and simply whirls the device with a lasso gesture.

Fig. 3

Whirling Poi

Anyone can twirl a tethered weight, and control of speed is easier than with padiddle-style spinning. Such whirling also more easily accommodates flexible handedness and spinning direction (chirality), the significance of which is explained below. For a poi-twirling animation, we deployed a posable action figure of a character known from anime, Haruhi Suzumiya: her figurine was suspended upside-down, grasping a tether attached to a weight resting on a rotating motorized turntable. Reinverting the captured video restored normal orientation, as seen in Fig. 3b, and the cyclic stream can be scrubbed.5

Virtual perspectives: mirrored vs. tethered puppets

A major display modality of the Twhirleds system is mixed virtuality environments, real world-modulated, computer-synthesized scenes populated by avatars as representatives of human players, and a central concern of the experience is flexible virtual perspective. As presented in Table 1, virtual perspective can be classified according to proximal, medial, and distal categories (Cohen and Villegas 2016), elaborated following.

Table 1 Degrees of immersion and virtual perspectives in avatar-populated immersive environments

First-person: endocentric

1st-person perspectives are purely immersive, featuring a point-of-view imitating that from inside a designated avatar’s head. Such intimacy can be described as endocentric, centered within a subject, and is sometimes imprecisely referred to as a “PoV” (point-of-view) perspective.

Second-person: egocentric (idiothetic)— mirrored and tethered

Relaxing the tight coupling of a 1st-person PoV, 2nd-person perspectives allow displaced experience. Such perspectives can be described as egocentric, centered on (but not necessarily within) a subject, a kind of metaphorical leashing. Like a 1st-person perspective, a 2nd-person view is ‘self-centered’ (a.k.a. idiothetic), but the sensation is more explicitly an out-of-body experience. Two main styles are mirrored mode, in which a human user views a self-identified avatar frontally, as if looking in a mirror, and tethered mode, in which a virtual camera trails behind (and usually a little above) one’s avatar. Despite their name, first-person shooter (“FPS”) games often feature such a shifted perspective, so that each player views their avatar from behind-the-back or over-the-shoulder.

Third-person: exocentric (allothetic or allocentric)

A 3rd-person, distal perspective can be described as exocentric (‘centered without,’ a.k.a. allothetic or allocentric), logically separate from any user, as it is oblivious or indifferent to the position of any particular character. Unlike 1st- and 2nd-person perspectives, 3rd-person points-of-view are egalitarian and non-individualized, decoupled from particular avatars (Cohen 1998).

Inspection gestures: “spin-around”

In image-based rendering, two styles of 360° viewing predominate, outward-looking “panos” and inward-looking “turnos” (a.k.a. object movies), as outlined by Table 2. The former emulates the view which would be captured by a radial, outward-facing camera; the latter turns perspective “inside-out” and is related to a “spin-around” gesture. By synchronizing rotation about the camera axis with revolution around an object of regard, like the tidally locked moon presenting the same face while orbiting the Earth, a phase-locked virtual camera expresses such an “inspection gesture,” as seen in Fig. 4 (Cohen et al. 2007). In the following sections, “camerabatics” and fluid perspective are shown to enhance appreciation and expression of padiddle and poi flow arts.
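
The phase-locked coupling of rotation and revolution can be stated concretely. Below is a minimal sketch (class and method names are illustrative, not the authors' code) of an orbiting inspection camera: as the viewpoint revolves around an object of regard, its yaw is slaved to the orbit phase so that it always faces the object.

```java
/**
 * Minimal sketch of a phase-locked "spin-around" inspection camera: as the
 * camera revolves around an object of regard, its own rotation is locked to
 * the revolution angle so that it always faces the object, like the tidally
 * locked Moon. Names and API are illustrative, not the authors' code.
 */
public class InspectionCamera {
    /** Returns {x, y, yawDegrees} of the camera for a given orbit phase. */
    static double[] orbit(double centerX, double centerY, double radius, double phaseDeg) {
        double phase = Math.toRadians(phaseDeg);
        double camX = centerX + radius * Math.cos(phase);
        double camY = centerY + radius * Math.sin(phase);
        // Rotation is slaved to revolution: the camera looks back at the center.
        double yawDeg = (phaseDeg + 180.0) % 360.0;
        return new double[] { camX, camY, yawDeg };
    }

    public static void main(String[] args) {
        for (int deg = 0; deg < 360; deg += 90) {
            double[] pose = orbit(0, 0, 5, deg);
            System.out.printf("phase %3d°: cam=(%.1f, %.1f) yaw=%.0f°%n",
                    deg, pose[0], pose[1], pose[2]);
        }
    }
}
```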

Fig. 4

Inspection gesture: Phase-locked rotation and revolution of subject for orbit around an object of regard. (Cameras arranged around the token rosette are labeled with the direction they are pointing.)

Table 2 Panoramic and turnoramic perspectives

Related toys, devices, apps, & systems

Generations of human–computer interfaces roughly correspond to the number of modeled spatial dimensions. Chronologically and logically following 1st-generation, 1D textual “command line” interfaces (“CLIs”) and 2nd-generation, 2D GUIs (graphical user interfaces), dynamically interactive gesture interpretation represents the 3rd generation of man–machine paradigms: 3D SUIs (spatial user interfaces) and KUIs (kinetic user interfaces). There are many gesture interpretation systems, featuring various kinds of mechanical, inertial, optical, and magnetic sensing. Exhaustive review is beyond the scope of this article, but some contemporary relevant interfaces can be surveyed, including many that are commercially available.

Mo-Cap General-purpose motion-capture systems often use optical tracking. For instance, the Vicon system6 is representative of optical sensing systems, typically featuring ceiling-mounted IR (infrared) cameras ringed with LEDs, tracking retroreflective markers arranged on actors (animate or inanimate) to infer motion. Other systems, such as Organic Motion’s Open Stage,7 require no markers. The Leap Motion8 controller is a USB desktop peripheral with IR LEDs and cameras, using software to estimate hand and finger postures and gestures. The HP Sprout computer9 uses downward-facing projectors and image sensors, through which users can interact with physical and digital content.

Set-top The Microsoft Kinect for Xbox game consoles10 uses visible and IR range cameras for full-body mo-cap. Other “seventh generation” television consoles use a different strategy, featuring hand-held controllers to track arm motion. The Nintendo Wiimote11 is tracked by a combination of built-in accelerometers and IR detection to sense position in 3-space. The Sony PlayStation Move12 features wand controllers with glowing, LED-illuminated orbs, which active markers can be tracked by visible light cameras. The Oblong Mezzanine spatial operating system13 uses handheld wands and ultrasonic emitters to recognize and interpret gestures, like those portrayed by the movie “Minority Report.” Twhirleds is distinguished from these by virtue of its intrinsic sensing, onboard tracking of spinning (padiddle) and whirling (poi). In that sense, it can be compared to the Zoom ARQ Aero RhythmTrak,14 which features an embedded accelerometer for controlling music effects.

Mobile Smartphones are popular platforms for sensing applications. The Samsung Galaxy features “Air Gesture” control interfaces,15 which interpret waving hand gestures, using a front camera and proximity sensor. Billinghurst et al. used smartphones to acquire hand gestures captured by a mobile camera (Billinghurst et al. 2014), overlaying mixed reality graphics stabilized by internal position sensors. This interface is somewhat different from our system, which uses smartphone position data directly, but whose effects are displayed across the network. For steering players or playthings around courses, targets, and obstacles, mobile applications often leverage gyroscopic interpretation but not magnetometer-sensed absolute direction.

Representative instances include Labyrinth,16 a skeuomorphic recreation of the classic marble maze game; Crazy Snowboard,17 in which players hit jumps to “get air;” Frisbee™ Forever,18 in which thrown virtual flying discs are not ballistic (affected only by gravity and wind), but influenced by player control as they fly; and Frax HD,19 in which the user steers through fractal formscapes. Indeed, the iOS interface itself features a parallax effect that subtly shifts the foreground according to device inclination to simulate metaphorical depth, allowing icons to float above the background wallpaper. Such “tilt and roll” control can also modulate sound effects in iOS GarageBand.20

VR & AR: Position monitoring is used for head-tracking by Google Cardboard21 and HMDs, as well as augmented reality applications such as Anatomy 4D,22 which renders a 3D perspective above a flat fiducial marker, and GoSkyWatch,23 which aligns astronomical guides with celestial attractions. Orientation-sensing apps24 naturally use the magnetometer, as does the Papa Sangre II25 audio game, which uses device orientation to explore a spatial soundscape. Of course navigation apps such as Apple Maps26 and Google Maps27 use the magnetometer to calibrate compass direction.28 Periscope29 and Facebook Live30 use rotation tracking for navigating around photospherical video streams. These apps differ from our system in being designed for slower (“static”) event streams than those dynamically generated by padiddle and poi. Because it was designed to support flow arts, the Twhirleds system can handle rapidly spun or whirled affordances, which express faster motion than ordinary gestures.

Other: Some other amusement-related developments don’t fit into these crude categories, but are somewhat related to the Twhirleds system. “Twister”31 uses the asymmetry and embedded vibrator of a smartphone to cause it to rotate when balanced on its base, enabling automatic capture of panoramic imagery. The “Fyuse”32 spatial photography app uses smartphone orientation to scrub through sequences of orbiting inspection frames. Deploying sensors for sports is also increasingly popular; a recent and representative instance of such computer-enhanced play uses digital motion analysis to characterize freestyle snowboarding tricks (Groh et al. 2016). The “Centriphone,”33 a tethered inward-facing selfie camera, is especially effective for action sports such as skiing. These are all rather different from the Twhirleds applications, though, since they don’t distribute smartphone position data.

Method

To display azimuthal modulations, virtual objects in mixed virtuality fantasy scenes were rigged to be driven by twirled devices, as explained below.

Architecture

Direct Twhirleds manipulation gives “closed-loop,” multimodal feedback, both static pointing and dynamic spinning or whirling. By simply pointing the affordance, anything can be oriented or steered. Pointing is most appropriate for endocentric operations, such as moving a virtual camera, since an auditory or visual perspective that changes too quickly is unstable and unsatisfying. Twirling is appropriate for ego- or exocentric operations, such as prop flailing, since such rapid changes can be best apprehended by a stable perspective.

Broad configurability of the Twhirleds clients allows flexible deployment. Variable transmission gain can scale the control:display ratio, gearing down rotation to allow fine control, allowing fast twirling to be shared as more leisurely turning, or even overdriving it to exaggerate such torque. Network transmission may be one-shot (pulsed) or continuous, including thresholded filtering for choked bandwidth, azimuthal (rotation) and/or circumferential (revolution) events, invertible polarity, and wrapped (folding over at 360°) or unwrapped yaw. Device vertical orientation may be upright or inverted (as was seen in Fig. 2), “trim tabs” are used for calibration, and a modal timer can disable touch control while a tablet is being spun, preventing inadvertent change of settings.
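
As a concrete illustration of this configurability, here is a minimal sketch, under assumed parameter names (gain, trim, invert, wrap, thresholdDeg), of the yaw-transmission stage: gain scaling, polarity inversion, optional wrapping at 360°, and a change threshold that chokes the outgoing event stream. It is not the actual Twhirleds code.

```java
/** Illustrative sketch of the configurable yaw-transmission stage: gain scaling
 *  ("gearing"), optional wrapping at 360°, polarity inversion, and a change
 *  threshold that chokes the event stream. Parameter names are assumed. */
public class YawTransmitter {
    double gain = 1.0;         // control:display ratio (overdrive if > 1)
    double trim = 0.0;         // calibration offset, degrees
    boolean invert = false;    // polarity
    boolean wrap = true;       // fold at 360°, or accumulate unwrapped yaw
    double thresholdDeg = 2.0; // suppress updates smaller than this
    private double lastSent = Double.NaN;

    /** Returns the yaw to transmit, or null if the change is below threshold. */
    Double process(double sensedYawDeg) {
        double yaw = (invert ? -sensedYawDeg : sensedYawDeg) * gain + trim;
        if (wrap) {
            yaw = ((yaw % 360.0) + 360.0) % 360.0;
        }
        if (!Double.isNaN(lastSent) && Math.abs(yaw - lastSent) < thresholdDeg) {
            return null; // choked: not worth a network event
        }
        lastSent = yaw;
        return yaw;
    }
}
```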

Data distribution: client–server architecture

As shown in Fig. 5, to enable easy integration with various multimodal duplex interfaces, we use our own Collaborative Virtual Environment (CVE) to synchronize distributed clients34 (Kanno et al. 2001). The CVE is a lightweight, subscription-based, client–server protocol, multicasting (actually replicated unicasting) events such as azimuthal updates on shared channels. Mobile interfaces were developed for Google Android35 and Apple iOS36— the former communicating directly with the CVE server, the latter through TCP sockets.
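
The CVE wire protocol is not reproduced here, but the subscription and replication idea can be sketched schematically: a session server keeps a list of subscribers per channel and forwards each azimuth update to every other subscriber (“multicasting by replicated unicasting”). Class names are illustrative, not the real CVE API.

```java
import java.util.*;

/** Schematic of subscription-based event replication ("multicasting by
 *  replicated unicasting"), as in a CVE-style session server. The actual wire
 *  protocol and class names are not those of the real CVE. */
class SessionServer {
    private final Map<String, List<Client>> channels = new HashMap<>();

    void subscribe(String channel, Client c) {
        channels.computeIfAbsent(channel, k -> new ArrayList<>()).add(c);
    }

    /** Replicate an azimuth update to every other subscriber on the channel. */
    void publish(String channel, Client sender, double azimuthDeg) {
        for (Client c : channels.getOrDefault(channel, List.of())) {
            if (c != sender) {
                c.deliver(azimuthDeg);
            }
        }
    }

    interface Client { void deliver(double azimuthDeg); }
}
```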

Fig. 5

Architecture: Our CVE provides a shared infrastructure, allowing heterogeneous, multimodal, cloudy clients to display data from multiple twirling affordances. The client-server architecture uses a “star” or “hub-and-spokes” topology to logically arrange multimodal control and display clients around a central CVE session server, sometimes with mediating “glue” middleware. Shared channels are subscribed to, and runtime client-generated events are multicast to peers at the edge of the network. For instance, a mobile client (bottom left) joins a session at launch-time, and then proceeds to stream orientation updates. Events are transmitted (left) into workstation-platformed event handlers. For iOS devices such as iPhones and iPads, events are converted by an iOS–CVE transcoder (left center) before being forwarded to a session server (center). The server redistributes events to channel-subscribed clients (periphery), which display them according to whatever local state (such as virtual camera position) and modality (auditory, visual, stereographic, etc.)

Multimodal display

A bonus of the design is that mobile devices can display visual output graphically, as seen in Fig. 6. By compensating for rotation or spinning motion, graphical display can be stabilized, as was seen in Fig. 2.

Fig. 6

Running Twhirleds app, including typical settings. (This image is for the Apple iPad tablet, but the interfaces for the iPhone smartphone, iPad mini phablet, and Android devices are similar.)

Multimodal interoperability: other conformant clients

Since our twirling interfaces conform to the CVE protocol, control can be integrated with other clients, including those used for panoramic, turnoramic, and photospherical display (as distinguished in Table 3). Other conformant clients include stereographic displays, rotary motion platforms, speaker arrays and spatial sound diffusers, and musical renderers, described following.

Table 3 Surround displays

Spatial sound

Twhirleds control can be used to modulate spatial sound. Such steering can be egocentric or exocentric— controlling, that is, either the yaw of a sink (virtual listener), in which case an entire subjective soundscape rotates, or just the bearing of one of the sources (virtual sounds), in which case only that particular source is moved. We use the conformant client “Multiplicity” (Fernando et al. 2006), shown symbolically at the top right of Fig. 5 and graphically in Fig. 7, to stereo-pan prerecorded audio files, flattening 2D circumferential space into 1D stereo lateralization. We also use “S6” (Soundscape-Stabilized Spiral-Spring Swivel-Seat) (Cohen and Sasa 2000), shown in Fig. 8, to monitor “unwrapped” azimuth and lateralize realtime audio input streams.
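
As a sketch of how a circumferential bearing can be flattened into 1D stereo lateralization, a generic constant-power pan law can map azimuth to left/right gains; this stands in for, and is not necessarily identical to, Multiplicity's actual mapping.

```java
/** Sketch: flatten a circumferential bearing into 1D stereo lateralization
 *  with a constant-power pan law. This is a generic mapping, not necessarily
 *  the exact one used by the "Multiplicity" client. */
public class StereoPanner {
    /** @param azimuthDeg source bearing, 0° = front, +90° = right
     *  @return {leftGain, rightGain} */
    static double[] pan(double azimuthDeg) {
        // Project the circle onto the lateral axis: sin() folds front/back together.
        double lateral = Math.sin(Math.toRadians(azimuthDeg)); // -1 (left) .. +1 (right)
        double angle = (lateral + 1.0) * Math.PI / 4.0;        // 0 .. pi/2
        return new double[] { Math.cos(angle), Math.sin(angle) };
    }

    public static void main(String[] args) {
        for (int az : new int[] { -90, 0, 90, 180 }) {
            double[] g = pan(az);
            System.out.printf("az %4d° -> L %.2f  R %.2f%n", az, g[0], g[1]);
        }
    }
}
```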

Fig. 7

Stereo localization in a virtual concert can be performed with Multiplicity

Fig. 8

Spiral Spring Display: For small numbers of rotations, a spiral spring graphical interface (including Chromastereoptic color modulation (Steenblik 1993) for virtual depth) is adequate, but for arbitrary bearing, it quickly runs out of coils

Besides such stereophonic display, a crossbar matrix mixer can sweep audio channels around a speaker array (Sasaki and Cohen 2004). Alternatively, a Pure Data37 (Matsumura 2012; Chikashi 2013) Vector Base Amplitude Panning “patch” (subroutine) can control amplifier gain modulation for intensity panning (Pulkki 1997; 2000; Pulkki et al. 2011). Such directionalization often takes the form of a 2:8 up-mix, diffusing both sides of a stereophonic pair into a circumferential display. Karaoke recordings are convenient for source material, since they comprise synchronized stereo recordings with isolated channels, allowing vocal tracks and orchestral accompaniment to be steered separately but auditioned together. We can also use our university’s 3D Theater, shown in Fig. 9, to pantophonically pan parallel inputs across speaker arrays using a networked mixer as a crossbar spatializer. This architecture scales up to arbitrary degrees of polyphony: multichannel songs, conference chatspaces, and immersive soundscapes can be dynamically diffused via such controllers.
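
A simplified stand-in for such a 2:8 up-mix is pairwise constant-power panning around a ring of eight loudspeakers, sketched below; the actual deployment uses the Pure Data VBAP patch and crossbar mixers described above, so this is only illustrative.

```java
/** Sketch of a 2:8 "up-mix": each channel of a stereo pair is circulated
 *  around an 8-speaker ring with pairwise constant-power panning between the
 *  two loudspeakers adjacent to its bearing. Simplified stand-in for the
 *  Pure Data VBAP patch mentioned in the text. */
public class RingPanner {
    static final int SPEAKERS = 8;

    /** Gains for one source at the given bearing across the speaker ring. */
    static double[] gains(double bearingDeg) {
        double[] g = new double[SPEAKERS];
        double sector = 360.0 / SPEAKERS;
        double pos = (((bearingDeg % 360) + 360) % 360) / sector; // 0..8
        int lower = (int) Math.floor(pos) % SPEAKERS;
        int upper = (lower + 1) % SPEAKERS;
        double frac = pos - Math.floor(pos);
        g[lower] = Math.cos(frac * Math.PI / 2);  // constant-power crossfade
        g[upper] = Math.sin(frac * Math.PI / 2);
        return g;
    }

    public static void main(String[] args) {
        double yaw = 30;                       // e.g., from a Twhirleds event
        double[] vocals = gains(yaw);          // one karaoke channel at the yaw bearing
        double[] backing = gains(yaw + 180);   // the other channel opposite, for example
        System.out.println(java.util.Arrays.toString(vocals));
        System.out.println(java.util.Arrays.toString(backing));
    }
}
```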

Fig. 9

The University of Aizu 3D Theater display systems, featuring ceiling-mounted speakers for sound diffusion and multiple projectors for stereoscopic display

Turnoramic, panoramic, and photospherical imagery

As seen on the right of Fig. 5, photospherical browsers can be used to rotate adjustable fields-of-view from omnidirectional imagery (Cohen 2012) (as in Fig. 10), including videos. These techniques are multimodally integrated in the “Schaire” (for ‘share chair’) rotary motion platform (Cohen 2003), shown in Fig. 11.38 Steerable by local or networked control, including Twhirleds,39 a laptop computer on its desk can display panoramic imagery whilst nearphones (near earphones) straddling the headrest display directionalized audio, the soundscape stabilized as the pivot chair rotates.

Fig. 10

PoI Poi: Point-of-interest poi panning projected panoramas, whirling-controlled situated panorama as locative media. The view emulates that which would be captured by a radial (outward facing) camera swung on its cable. The perspective can also be turned inside-out, modulating the orientation of an object movie, a turno, emulating an inspection or spin-around gesture

Fig. 11

The “Schaire” rotary motion platform is a swivel chair. A servomotor in the base spins the chair under dynamic control. Nearphones in the headrest display panned, stabilizable soundscapes

Mixed virtuality scenes

Twirling can also modulate positions of avatars and objects in virtual environments and game engines such as Alice40 (Dann et al. 2010; Olsen 2011; Cohen 2016), Open Wonderland41 (Kaplan and Yankelovich 2011), and Unity42 (Kaji and Cohen 2016; Kojima and Cohen 2016), as diagrammed on the bottom right of Fig. 5. Various scenes have been crafted, extended for augmented virtuality rigging to accept data from distal sources, and integrated as networked clients. Logical 3D layers allow alternating among various setting/avatar/prop combinations (as outlined by Table 4 and shown in Figs. 12 and 13). Scenes are enlivened with various techniques: some attributes are continuously animated; some attributes are automatically controlled (such as spotlights that follow whirling poi devices); and multiple twirling controllers can animate different aspects of a scene, including avatars, props, scenery, and camera viewpoint. In practice, usually one or two players whirl poi-like toys while an attendant controls scene selection, spin-around camera angle, and secondary scene attributes.

Fig. 12

Augmented virtuality twirling fantasy scenes, rigged for mobile affordance modulation (first generations of respective scenes)

Fig. 13

Augmented virtuality twirling fantasy scenes, rigged for mobile affordance modulation (second generations of respective scenes)

Table 4 Augmented virtuality scenes, rigged for twirling affordance adaptive projection

Automatic visual and logical alignment via “reality distortion field”

As seen below in Fig. 14, while spinning a padiddle-style flat object or whirling a poi-style weight, a player monitors virtual projection in a graphic display with a displaced, “2nd-person” perspective, able to see a self-identified puppet, including orientation of the twirled toy. We explore the potential of self-conscious avatars, not in the sense of self-aware artificial intelligence, but rather figurative (humanoid) projections that can not only display real-world data but also automatically accommodate virtual camera position to maintain visual and logical consistency for human users presumed to prefer visual alignment to veridical (reality-faithful) projection.

Fig. 14

As the virtual camera orbits a puppet representing the user, the self-conscious, ambidextrous avatar switches hands. In this example, phase perturbation is disabled, so affordance projection is not aligned: World-referenced affordance azimuth is not adjusted, so human–avatar correspondence is up to half a cycle out of phase

Self-conscious avatar ambidexterity

Like television or movie actors who adjust their pose to complement a camera point-of-view, Twhirleds avatars are aware of virtual camera position and the projection mode of an active human pilot visually monitoring a scene. A unique feature of the rigging is that the avatars are strategically ambidextrous: although a human player typically uses a particular hand (usually the right) to twirl a toy, as the virtual viewpoint sweeps around between “tethered” and “mirror” perspectives (Figs. 15a, b, or c and 16), the puppet dynamically switches virtual manipulating hand, even while the prop is spinning or whirling. This logical rather than physical mapping requires flexible rigging, an ambidextrous avatar for a unidextrous human: bilateral display of unilateral control.

Fig. 15

Continuous spin-around gesture, without and with hand-switching: Ambidextrous display of unidextrous control by context-aware puppet

Fig. 16

As the subjective camera orbits (via phase-locked rotation and revolution) in an inspection gesture between mirrored and tethered perspectives around an objective character animated by real-life motion capture, the puppet automatically switches virtual manipulating hand to preserve alignment and logical association

For example, a right-handed user would prefer to see their self-identified vactor holding an affordance in the right hand for dorsal (tethered) views, but would rather see the puppet switch hands for frontal (mirrored) perspectives. If the virtual camera were to cut discontinuously between frontal and dorsal views, the avatar could simply switch hands “offline” (during instantaneous transition), but the interface features seamless inspection gestures. As seen in Figs. 14, 15, and 16, correspondence is preserved, even as the camera orbits around an avatar.43 Note that azimuthal (horizontal) twirling is the only gyration for which such accommodation could work. For instance, a tennis- or ping-pong-playing game character can’t switch racket- or paddle-holding hand without reflecting the stroke.
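
The hand-selection logic can be sketched as follows: whether the prop is rendered in the avatar's left or right hand is decided by whether the orbiting virtual camera is currently in the dorsal (tethered) or frontal (mirrored) half of its orbit. The 90°/270° switching boundaries and names are assumptions for illustration, not the authors' rig.

```java
/** Sketch of the self-conscious ambidexterity decision: the avatar's
 *  manipulating hand depends on whether the orbiting camera is in the dorsal
 *  ("tethered") or frontal ("mirrored") half of its orbit. Thresholds and
 *  names are illustrative, not the authors' rig. */
public class AmbidextrousRig {
    enum Hand { LEFT, RIGHT }

    /** @param cameraOrbitDeg 0° = directly behind the avatar (tethered),
     *                        180° = directly in front (mirrored)
     *  @param userHand       the hand the human actually twirls with */
    static Hand displayHand(double cameraOrbitDeg, Hand userHand) {
        double a = ((cameraOrbitDeg % 360) + 360) % 360;
        boolean frontal = a > 90 && a < 270;    // mirrored half of the orbit
        if (!frontal) {
            return userHand;                    // dorsal view: keep real chirality
        }
        return (userHand == Hand.RIGHT) ? Hand.LEFT : Hand.RIGHT; // mirror it
    }
}
```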

Projected affordance phase perturbation

For spun affordance scenes, such as those that feature padiddled pizza or cake (Figs. 12(a) or 13(a)) or rotating halo (Figs. 12(b) and 13(b)), twirling is basically phase-oblivious; it is difficult or impossible to perceive absolute azimuth. But for whirled affordance scenes, such as those featuring poi (Figs. 12(c) and 13(c)) or jousting flails (Figs. 12(d) and 13(d)), phase is more conspicuous because of the radial tether, and mismatch between control and display — discrepancy between position of the affordance in the real world and that of its mixed virtuality projection — is more glaring, as seen in Fig. 14.

To resolve this apparent inconsistency, caused by one’s continuous association with an avatar, the phase of the affordance display can be progressively offset as the camera swings around, even while the toy is being twirled, so that the alignment is perspective-invariant. Consummating the visual adjustment initiated by the ambidextrous avatar, phase of a projected prop is perturbed by a “reality distortion field,” as seen in Fig. 17. By relaxing strict correspondence between real and virtual worlds, an avatar can both express position of a twirled affordance and align visually and logically with a monitoring user. That is, while a virtual camera revolves and rotates around a rigged avatar, spinning or whirling a prop in synchrony with a human-twirled affordance, the phase of the displayed toy’s image is adjusted to match the bearing of the phase-locked camera. Such perturbation “borrows” some phase from the twirled affordance projection, restoring or accumulating it as the camera moves back or continues on around in its orbit. The adjustment is also affected by any transmission scaling or offset injected by the mobile device: \(\theta_{\mathrm{projected\ affordance}} = \theta_{\mathrm{sampled\ affordance}} \times \mathrm{gain}_{\mathrm{transmission}} + \theta_{\mathrm{offset}} + \varphi_{\mathrm{virtual\ camera}}\).
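
The equation above can be read directly as code; the following sketch (field names are illustrative) perturbs the displayed prop azimuth by the instantaneous virtual camera angle, on top of any transmission gain and offset.

```java
/** Sketch of "reality distortion" phase perturbation, implementing the
 *  equation above: the displayed prop azimuth borrows phase from the orbiting
 *  virtual camera so that human-avatar alignment is perspective-invariant.
 *  Field names are illustrative. */
public class AffordanceProjector {
    double transmissionGain = 1.0;
    double offsetDeg = 0.0;

    double projectedAzimuth(double sampledAzimuthDeg, double cameraAngleDeg) {
        double theta = sampledAzimuthDeg * transmissionGain + offsetDeg + cameraAngleDeg;
        return ((theta % 360) + 360) % 360;    // wrap into [0°, 360°)
    }
}
```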

Fig. 17

Phase perturbation of projected affordance: As the perspective changes (and the simulated projected prop is moved to opposite hand), phase of the projected affordance is adjusted to maintain visual alignment, and so remains pointed to the right in this artificial sequence

Such warped projection can be confusing, so to clarify the relation across the reality–virtuality mapping, the relative position of the virtual camera with respect to the avatar can be projected back into user space, using environmental lighting as a totem for the PoV, as described following.

Environmental lighting and exocentric perspective display virtual camera position

To clarify the virtual inspection gesture and the relationship between the real and virtual spaces, environmental lighting is deployed. Philips Hue44 wirelessly networked LED bulbs and original middleware represent and re-present the relative position of the virtual camera in back-projected user space (Cohen et al. 2014). Selection of one or two bulbs from a ring of four networked lights arranged around human players is used to stand for the orbiting virtual camera, as illustrated by Figs. 18 and 19b.

Fig. 18

This composite shows four frames from a wrapped cycle of an orbiting virtual camera (at 0°, 90°, 180°, and 270°). The user twirls a poi-like affordance (shown as a knob-terminated segment), which is adaptively projected into the scene. For simplicity, the illustration uses a static actor and affordance position, simple “right pointing,” but typically the manipulable is whirled, even as the virtual camera moves. Each quadrant (delimited by gray rectangles) associates a subject (red, lower) with a projection (blue, upper) through the graphical display (“dot–dash” segment). (“Real” things are shown in blue, “virtual” things in red, and “hybrid” things in purple.) The virtual camera sweeps around (larger, purple oval), changing the perspective on the self-identified avatar (smaller, blue oval). A subtlety lies in the phase adjustment of the affordance’s image: in order to shift continuously between frontal, mirrored (top center) and dorsal, tethered (bottom center) perspectives, displayed azimuth of the object must be perturbed. Although the human actor frozen here in the diagram strikes a static pose across the camera angles, affordance projection is artificially rotated to accommodate the shifting perspective. Environmental lighting surrounding the player, iconified by the “light+camera”s in the figure, represents positions of the virtual camera projected back into user space, the perspective standpoint in the “real” world

Fig. 19

Juxtaposed alternate projections of poi twirler: Parameterizing avatar handedness and twirling prop phase by virtual camera position lessens cognitive load of mentally reflecting projection of a self-identified avatar or that of a spun or whirled affordance, even while a free-viewpoint virtual camera swings around. An affordance held in user’s right hand is instantaneously pointing to user’s right, but the camera-conscious mixed virtuality avatar shifts it to the contralateral (opposite) hand as projected affordance phase is perturbed to flatter frontal perspective. (“Order” in the subfigure captions refers to those mentioned by Table 1.) (a) 0th-order (“real life”) scene, with poi-twirling user observing both 2nd- and 1st-order projections. (b) 1st-order, 3rd-person scene, showing actual affordance position (translucent, red-capped), anticipated perturbed projection (opaque, green-capped), orbital virtual camera (translucent), unwrapped spin-around phase (dashed helical camera tail stream, 1 1/2 revolutions from initial dorsal position) connected to graphic monitor (translucently glazed), and simulation of environmental lighting (frontal light illuminated). (c) 2nd-order, 2nd-person mixed-virtuality fantasy scene with ambidextrous avatar and affordance projection perturbation. A right-handed user appreciates mirrored projection of a frontal perspective, since it visually aligns and therefore seems more natural than a faithful, chirality-preserving mapping

Although a toy might be twirled too fast for such lights to follow in the real world, so that only CG “eye candy” spotlights are practical (as was seen in Fig. 17), speed of orbiting of a virtual camera can be adjusted to accommodate even sluggish coordination. The roomware lighting system takes about a second to adjust distributed bulbs, but repositioning a virtual camera and invocation of tethered or mirrored “detent” perspective modes have the luxury of arbitrary timing. Even though a user might be quickly whirling an affordance, the virtual camera can track lumbering control, swinging perspective in the virtual world around in synchrony with light switching in the real world.

The “Lights, Camera, Action!” module45 (Tsukida et al. 2015) (seen in the top left of the cloud in Fig. 5) controls the lighting system, which lights are arranged in a ring around human players in “meatspace.” Each light signifies the location of the virtual camera. That is, for a frontal view, the light in front of the user is lit, and for a dorsal view, the light behind. Side lights interpolate between mirrored and tethered perspectives, and show which direction the virtual camera moves. The cyclic illumination pattern follows not a simple 1-of-n demultiplexing, but a modified Gray Code sequence, like a 4-coil stepper motor that interpolates between its poles.
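
A sketch of such ring-light selection appears below: the camera bearing is quantized into eight states over the four bulbs so that, as in a Gray code (or a four-coil stepper motor at half-steps), adjacent states differ by switching a single bulb. The bulb layout and 45° quantization are assumptions for illustration.

```java
/** Sketch of the ring-light selection: the camera bearing is quantized into
 *  eight states over four bulbs so that, like a Gray code (or a 4-coil stepper
 *  motor at half-steps), adjacent states differ by switching a single bulb.
 *  Bulb indices: 0 = front, 1 = right, 2 = back, 3 = left (assumed layout). */
public class CameraLightRing {
    /** Returns the set of lit bulbs (one or two of four) for a camera bearing. */
    static java.util.Set<Integer> litBulbs(double cameraBearingDeg) {
        double a = ((cameraBearingDeg % 360) + 360) % 360;
        int halfStep = (int) Math.round(a / 45.0) % 8;    // 8 states around the ring
        java.util.Set<Integer> lit = new java.util.TreeSet<>();
        lit.add((halfStep / 2) % 4);                      // primary bulb
        if (halfStep % 2 == 1) {
            lit.add((halfStep / 2 + 1) % 4);              // interpolating neighbor
        }
        return lit;
    }

    public static void main(String[] args) {
        for (int deg = 0; deg < 360; deg += 45) {
            System.out.println(deg + "° -> bulbs " + litBulbs(deg));
        }
    }
}
```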

The motivating idea is that ambient cues can clarify interpretation of mixed virtuality scenes (Ghosh et al. 2005), especially since the fluid perspective, orbiting inspection camera, and reality distortion field — ambidextrous puppets and phase perturbation — complicate the relationship between physical and virtual spaces. Similarly, when browsing panoramic imagery, ambient lighting can correspond to the direction of scene features, moving around the user as an image is frontally panned, reinforcing directional awareness, as shown in Fig. 20.

Fig. 20

Besides indicating position of a virtual camera in user space, ambient lighting can also match orientation of an on- or off-screen panoramic scene feature, such as the sun

Since the distortion and environmental lighting can be difficult to understand, we developed an additional, 3rd-person, exocentric rendering, besides the original, 2nd-person, egocentric perspective. As seen in Fig. 19, the exocentric interface illustrates the mediation of the rigging and the projection back into user space, including ambient lighting (Cohen and Oyama 2015).46

Audio and music

Besides controlling spatial sound, Twhirleds can also sequence music,47 resembling at least superficially two acoustic whirling sound-makers:

“semitaro” or “minminzemi” cicada: a short cylinder, often decorated to resemble an insect (“semi” means cicada in Japanese), covered with paper at one end, through the center of which a string is attached and spun by a resin-coated spindle, yielding a sawtooth wave-like slip/stick vibration buzzing (like a violin or other bowed instrument), and

bull-roarer: a weighted aerofoil swung on a long cord, producing a characteristic vibrato (pitch modulation).

Twirling can play a song, as if operating a score-following “orgel” music box (as was seen in Fig. 15). A typical whirling rate of a 150 g device swung on a 1 m tether is almost 2 Hz, happily coinciding with a typical musical tempo of 120 bpm, so sequenced songs are naturally paced at one beat/revolution (or multiples thereof), but can enjoy rubato (tempo variation).

The sequencing algorithm assumes that whirling is sampled at least four times per revolution. The process listens for event updates, infers the “ticks,” and synthesizes sequenced notes at each quadrant. This basic algorithm breaks the revolution associated with a crotchet (quarter note) into quarters, so can parse rhythms down to semiquaver (sixteenth note) resolution. The sequence trigger is also activated by directional zero-crossings, so a song can be played just by waving a handheld smartphone,48 which can confirm such ticks with locally displayed LED flashes and vibration.
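
A minimal sketch of this quadrant-tick sequencing follows; event delivery, zero-crossing triggering, and note synthesis are abstracted away, and the names are illustrative rather than the actual Twhirleds implementation.

```java
/** Sketch of the quadrant-tick sequencer: azimuth updates are quantized into
 *  quarters of a revolution, and a tick (semiquaver resolution against a
 *  one-beat-per-revolution song) fires whenever a new quadrant is entered.
 *  Event delivery and note synthesis are abstracted; names are illustrative. */
public class QuadrantSequencer {
    private int lastQuadrant = -1;
    private final Runnable playNextNote;

    QuadrantSequencer(Runnable playNextNote) { this.playNextNote = playNextNote; }

    /** Called for each azimuth update (assumed >= 4 samples per revolution). */
    void onAzimuth(double azimuthDeg) {
        int quadrant = (int) (((azimuthDeg % 360) + 360) % 360) / 90;   // 0..3
        if (quadrant != lastQuadrant) {
            lastQuadrant = quadrant;
            playNextNote.run();   // one tick per quadrant = semiquaver resolution
        }
    }

    public static void main(String[] args) {
        QuadrantSequencer seq = new QuadrantSequencer(() -> System.out.println("tick"));
        for (double az = 0; az < 720; az += 30) seq.onAzimuth(az);  // two revolutions
    }
}
```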

Results and discussion

A simple experiment was conducted to measure determinism of reported azimuths, with idealized repeatable conditions meant to approximate typical use scenarios. A motorized turntable, spinning at a constant rate (20°/s = 3⅓ RPM), rotated a smartphone running Twhirleds, sending streams of orientation to a session server. Time-stamped data logging was enabled on the server. The two network conditions were Wi-Fi, through a local access point, and cellular transmission, through a commercial carrier. Data was collected for a minute for each trial, and results of two typical trials are juxtaposed in Fig. 21.

Fig. 21

Comparison of delivered azimuth between Wi-Fi (blue) and 4G LTE (golden) networks

With a perfect system, a smooth straight line with evenly spaced points would be expected. In actuality, nondeterministic strobing on the mobile client and network jitter degrade such consistent performance. Correlations between the time-stamps and received azimuths were very close to unity for both Wi-Fi and 4G network conditions (0.999978 and 0.999959, respectively), indicating accuracy of the somewhat irregularly spaced measurements. (Repeated trials yielded similar data, so these results can be considered representative, and no averaging is reported here.) User experience and performance metrics of a related system extended from Twhirleds are reported in a separate article (Ranaweera and Cohen 2016).

End-to-end performance of the system is not yet totally satisfactory. There is sometimes significant lag, up to as much as eight seconds when the cellular network is used. More work is needed to characterize and ameliorate such latency. We are exploring alternate communication systems that would allow more synchronous interaction. However, especially when using Wi-Fi, response is almost good enough for interactive control, particularly with Android devices which communicate directly with the server (unlike iOS devices which currently require transcoding intermediation). For the particular applications described here, such issues are not “show stoppers,” since the twirled devices are moving too fast for discrepancies to be noticeable. Demonstration of such realtime capability can be observed in a video.49 Such performance can also be tuned by setting thresholds on the affordance, effectively choking the data source. In the future, event coalescence or aggregation at the server might also be possible, as a “look-ahead” feature can skip position update events that would be immediately superseded.
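
The suggested look-ahead coalescence could be as simple as keeping only the most recent pending update per device, as in the following sketch; this is a possible optimization, not a feature of the current server.

```java
import java.util.*;

/** Sketch of server-side event coalescence: if several azimuth updates from
 *  the same device are pending, older ones are superseded and only the latest
 *  is forwarded. A possible "look-ahead" optimization, not a feature of the
 *  current Twhirleds server. */
public class Coalescer {
    private final Map<String, Double> pending = new LinkedHashMap<>();

    /** Queue an update; any earlier pending value for the device is replaced. */
    synchronized void enqueue(String deviceId, double azimuthDeg) {
        pending.put(deviceId, azimuthDeg);
    }

    /** Drain the latest value per device for redistribution to subscribers. */
    synchronized Map<String, Double> drain() {
        Map<String, Double> batch = new LinkedHashMap<>(pending);
        pending.clear();
        return batch;
    }
}
```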

Conclusions

Tracking devices, motion-capture, gestural sensing, and the like can be used to nudge computer interfaces towards interaction styles described variously as ubicomp (Mark Weiser), calm computing (Weiser and Brown 1997), ambient media/intelligence/interaction, embedded or pervasive computing, organic (Rekimoto 2008), post-WIMP (van Dam 2005), reality-based, tangible (van den Hoven et al. 2013; Ishii 1998; Ishii and Ullmer 1997), and tacit interaction (Pedersen et al. 2000). Natural interfaces, the “disappearing computer” (Streitz et al. 2007) without mouse or keyboard (Cooperstock et al. 1997), are more convenient and intuitive than traditional devices and enable new computing environments.

The actual Twhirleds experience is currently more interesting than entertaining, more academic than fun. Readers are encouraged to download collateral software,50 install the app, and try it themselves. In the future, we hope to make it more recreational, exploring the interplay of interaction and display (both visual and auditory) of mutual phase, using extended groupware deployment, including collaborative musical performance. Besides whatever graphical, auditory, and vibratory output is separately displayed or embedded in twirling affordances themselves, we continue to explore embodied interaction, the expressiveness and potential of the happy alignment of gravity-oriented horizontal twirling gestures, horizontal inspection gestures, the geomagnetic field, and horizontally favored auditory directionalization acuity. In particular, the Twhirleds application leverages the alignment of the horizontal orientation of a normal pano or turno and its corresponding inspection or spin-around gesture, the horizontal orientation of compass-derived yaw data, and the symmetrically bilateral anatomy of humans and figurative avatars.

The difference between dynamic twirling modes and static pointing modes, even using a rotary controller (such as the Griffin Powermate51), is momentum, like the difference between bicycling and jogging. When a runner tires, she slows or stops, but a bicyclist can cruise or coast on a flat road with minimal effort. Similarly, one can spin an object with less effort than that required to turn it round-and-round, since flywheel or inertial rotation is a kind of default motion, requiring sustaining energy but only unfocused user attention. The considered Twhirleds use-cases (padiddle and poi) are at once both demanding and forgiving: rapid spinning gestures expect perky throughput, but their expression can tolerate phase delay.

Probably an alternate interface such as TouchOSC52 could yield comparable or perhaps improved performance. The main results of this project, though, are qualitative, not quantitative, and the most interesting aspects involve flexible connection to multimodal displays. Amplification of multimedia juggling can also be extended to leverage other kinds of sensing, and there are many ways to instrument one’s body or manipulative affordances. For instance, a microphone could be used to detect the whoosh of whirling. Other sensors such as those surveyed in the introduction— including gyroscope, accelerometer, GPS, barometer, and camera (for capturing optical flow, as in the newly announced Google VPS [visual positioning system or service])— could be used as well, integrable through sensor fusion. Ubicomp-style extrinsic (external) tracking can augment intrinsic (internal) metering; the “quantified self” can be measured by “other”!

Figure 22, adapted and extended (Cohen 2016) from (Brull et al. 2008), illustrates three dimensions characterizing mixed reality systems. The “Synthesis” axis is the original real–virtual continuum (Milgram and Colquhoun Jr 1999; Milgram and Kishino 1994); “Location” refers to where such systems are used; and “Diffusion” refers to degree of concurrent usage. Twhirleds can control photorealistic imagery and also mixed reality/virtuality scenes. It reconciles the stationary vs. mobile distinction by being both location-based and personally mobile, and is multiuser. Such systems are sometimes called “SoLoMo”: social, local, mobile. The dichotomy between mobile and LBS (location-based services) is resolved with “mobile-ambient” transmedial interfaces that exploit both individual mobile devices and shared locative public media.

Fig. 22

Mixed reality taxonomy: Synthesis (red) × Location (blue) × Diffusion (green)

Twhirleds’ mobile affordances projected onto social screens with back-projection of virtual camera position instantiate integration of telemetric personal control and public display (Memarovic 2015). Mixed reality and virtuality environments, using fluid perspective to blur the distinction between sampled and synthesized data, are literally illuminated by networked lighting.

As recapitulated in Fig. 23, the Twhirleds “exertoy” represents a physical interface for whole body interaction, a “practically panoramic” (Cohen and Győrbiró 2008), mobile-ambient, multimodal groupware interface intended for location-based entertainment (Kostakos and Ojala 2013; Silva and El-Saddik 2013; Sra and Schmandt 2015). It promotes social interaction through physical play (Bekker et al. 2010): a “come as you are” mo-cap-style interface, requiring no special markers, clothing, or external sensors.

Fig. 23

Twhirleds multimodal control and display

Endnotes

1 http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/Videos/SMS-CVE.m4v

2 https://www.youtube.com/watch?v=XF2pGPDrr7s

3 https://www.youtube.com/watch?v=Fpgj6nNb6ns

4 http://freestyle-frisbee.com

5 This rigged animation can be seen both in the center of the application mosaic towards the end (1′40″–1′54″) of the “ ‘CVE’: Collaborative Virtual Environment” video: https://www.youtube.com/watch?v=iJreaIXZSI8&t=1m40s and also as a loop at http://sonic.u-aizu.ac.jp/spatial-media/Videos/HaruhiSuzumiyaPoi.gif.

6 http://www.vicon.com

7 http://www.organicmotion.com/mocap-for-animation

8 https://www.leapmotion.com

9 http://www8.hp.com/us/en/sprout/home.html

10 http://www.xbox.com/en-US/xbox-one/accessories/kinect

11 http://wii.com

12 https://www.playstation.com/en-us/explore/accessories/vr-accessories/playstation-move/

13 http://www.oblong.com/mezzanine

14 https://zoom-na.com/products/production-recording/digital-instruments/arq-aero-rhythmtrak

15 https://play.google.com/store/apps/details?id=in.tank.corp.proximity

16 https://itunes.apple.com/app/labyrinth/id284571899

17 https://itunes.apple.com/app/crazy-snowboard/id294919686

18 https://itunes.apple.com/app/frisbee-forever/id431855391

19 https://itunes.apple.com/app/frax-hd-chumenorairutaimuno/id529646208

20 https://itunes.apple.com/app/garageband/id408709785

21 https://vr.google.com/cardboard

22 https://itunes.apple.com/app/anatomy-4d/id555741707

23 https://itunes.apple.com/app/goskywatch-planetarium-for/id364209241

24 Clinometer HD: https://play.google.com/store/apps/details?id=com.plaincode.clinometer, https://itunes.apple.com/app/clinometer-+-bubble-level/id286215117; Field Compass Plus: https://play.google.com/store/apps/details?id=com.chartcross.fieldcompassplus, Magnetmeter: https://play.google.com/store/apps/details?id=com.plaincode.magnetmeter, https://itunes.apple.com/app/id346516607; Magneto-Vision: https://itunes.apple.com/app/magneto-vision/id500207853; Orient: https://itunes.apple.com/app/orient/id362987455; Pro Compass: https://itunes.apple.com/app/pro-compass/id517739197; Spyglass: https://itunes.apple.com/app/spyglass/id332639548

25 https://itunes.apple.com/app/papa-sangre-ii/id710535349

26 https://www.apple.com/ios/maps

27 https://play.google.com/store/apps/details?id=com.google.android.apps.maps, https://itunes.apple.com/app/google-maps-real-time-navigation/id585027354

28 GPS Status & Toolbox: https://play.google.com/store/apps/details?id=com.eclipsim.gpsstatus2

29 https://itunes.apple.com/app/periscope/id972909677, https://play.google.com/store/apps/details?id=tv.periscope.android

30 https://live.fb.com/

31 https://itunes.apple.com/app/twister-best-photo-video-360/id668495813

32 https://play.google.com/store/apps/details?id=com.fyusion.fyuse, https://itunes.apple.com/app/fyuse-3d-photos/id862863329

33 http://www.centriphone.me

34 “ ‘CVE’: Collaborative Virtual Environment”: https://www.youtube.com/watch?v=iJreaIXZSI8

35 Twhirleds for Android in Google Play: https://play.google.com/store/apps/details?id=jp.ac.u_aizu.Twhirleds

36 Twhirleds for iOS in Apple iTunes App Store: https://itunes.apple.com/app/twhirleds/id962674836

37 http://puredata.info

38 “ ‘Schaire’ Rotary Motion Platform”: https://www.youtube.com/watch?v=l6M8pr7wQL4

39 “Whirled Worlds: Pointing and Spinning Smartphones and Tablets to Control Multimodal Augmented Reality Displays”: http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/Videos/Whirled_Worlds.mov, where Schaire control by Twhirleds can be observed at 2′15″–2′47″.

40 http://www.alice.org

41 http://www.openwonderland.org

42 https://unity3d.com

43 “Avatar Ambidexterity Allows Affordance Attitude Alignment”: http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/Videos/Tworlds4.mp4

44 http://www2.meethue.com

45 “ ‘Lights, Camera, Action!’: Ambient lighting extending photospherical display”: https://www.youtube.com/watch?v=Y7uIvOCgxpE

46 “Exocentric Rendering of ‘Reality Distortion’ User Interface Illustrating Egocentric Reprojection”: https://www.youtube.com/watch?v=lC7cNSB1ZWE

47 “Music Player Demonstration”: https://www.youtube.com/watch?v=3PLuqGWMOOQ

48 as was seen (1’14"–1’42") in aforementioned “ ‘Twhirleds’ for iOS and Android” video: https://www.youtube.com/watch?v=XF2pGPDrr7s&t=1m14s

49 “Motorized turn-table for automatic panning capture”: https://www.youtube.com/watch?v=dU-zZoIIngk

50 http://arts.u-aizu.ac.jp/spatial-media/Twhirleds

51 https://griffintechnology.com/us/powermate, as seen (1′05″–1′14″) in the aforementioned “ ‘CVE’: Collaborative Virtual Environment” video: https://www.youtube.com/watch?v=iJreaIXZSI8&t=1m5s

52 http://hexler.net/software/touchosc

References

  • Béïque, V, Dragone F. Cirque du Soleil: Nouvelle Experience. 2001. Sony Pictures. DVD video.

  • Bekker, T, Sturm J, Barakova E. Design for social interaction through physical play. Pers Ubiquitous Comput. 2010; 14(5):381–3. doi:10.1007/s00779-009-0269-9.

  • Brull, W, Lindt I, Herbst I, Ohlenburg J, Braun AK, Wetzel R. Towards Next-Gen Mobile AR Games. Comput Graphics Animation. 2008; 28(4):40–8. doi:10.1109/MCG.2008.85.


  • Billinghurst, M, Piumsomboon T, Bai H. Hands in space: Gesture interaction with augmented-reality interfaces. IEEE Comp. Graphics & App. 2014; 34(1):77–80. doi:10.1109/MCG.2014.8.


  • Chikashi, M. Pure Data Tutorial and Reference (in Japanese). Tokyo: Works Corporation; 2013.


  • Cohen, M. Integration of laptop sudden motion sensor as accelerometric control for virtual environments In: Rahardja, S, Wu E, Thalmann D, Huang Z, editors. VRCAI: Proc. ACM Int. Conf. on Virtual-Reality Continuum and Its Applications in Industry. Singapore: 2008. doi:10.1145/1477862.1477911.

  • Cohen, M, Ranaweera R, Ito H, Endo S, Holesch S, Villegas J. Whirling interfaces: Smartphones & tablets as spinnable affordances. In: ICAT: Proc. Int. Conf. on Artificial Reality and Telexistence. Osaka: 2011. p. 155. doi:10.13140/RG.2.1.3504.0726.

  • Cohen, M, Ranaweera R, Nishimura K, Sasamoto Y, Endo S, Oyama T, Ohashi T, Nishikawa Y, Kanno R, Nakada A, Villegas J, Chen YP, Holesch S, Yamadera J, Ito H, Saito Y, Sasaki A. “Tworlds”: Twirled Worlds for Multimodal ‘Padiddle’ Spinning & Tethered ‘Poi’ Whirling. In: SIGGRAPH. Anaheim: 2013. doi:10.1145/2503385.2503459. http://www.youtube.com/watch?v=sKruOxXJBNU.

  • Cohen, M, Villegas J. Applications of audio augmented reality: Wearware, everyware, anyware, & awareware In: Barfield, W, editor. Fundamentals of Wearable Computers and Augmented Reality. 2nd ed. Mahwah: CRC Press: Lawrence Erlbaum Associates: 2016. p. 309–30. doi:10.1201/b18703-17. ISBN 978-1-4822-4350-5, 978-1-4822-4351-2


  • Cohen, M. Quantity of presence: Beyond person, number, and pronouns In: Kunii, TL, Luciani A, editors. Cyberworlds. Chap. 19. Tokyo: Springer: 1998. p. 289–308. doi:10.1007/978-4-431-67941-7_19.


  • Cohen, M, Jayasingha I, Villegas J. Spin-around: Phase-locked synchronized rotation and revolution in a multistandpoint panoramic browser In: Miyazaki, T, Paik I, Wei D, editors. Proc. CIT: 7th Int. Conf. on Computer and Information Technology. Aizu-Wakamatsu: 2007. p. 511–6. doi:10.1109/CIT.2007.141.

  • Cohen, M, Sasa K. An interface for a soundscape-stabilized spiral-spring swivel-seat In: Kuwano, S, Kato T, editors. Proc. WESTPRAC VII: 7Th Western Pacific Regional Acoustics Conf. Kumamoto: 2000. p. 321–4.

  • Cohen, M. Poi Poi: Point-of-Interest Poi for Multimodal Tethered Whirling. In: MobileHCI: Proc. 14th Int. Conf. on Human-Computer Interaction with Mobile Devices and Services. San Francisco: 2012. p. 199–202. doi:10.1145/2371664.2371709.

  • Cohen, M. The Internet Chair. IJHCI: Int J Human-Comput Interact. 2003; 15(2):297–311. doi:10.1207/S15327590IJHC1502_7.


  • Cohen, M. Demo: Smartphone Rigging with GUI Control Emulation for Freeware Rapid Prototyping of Mixed Virtuality Scenes. In: SIGGRAPH Asia Symp. on Mobile Graphics and Interactive Applications. Macao: 2016. doi:10.1145/2999508.2999511.

  • Cohen, M, Ranaweera R, Ryskeldiev B, Oyama T, Hashimoto A, Tsukida N, Toshimune M. Multimodal mobile-ambient transmedial twirling with environmental lighting to complement fluid perspective with phase-perturbed affordance projection. In: SIGGRAPH Asia Symp. on Mobile Graphics and Interactive Applications. Shenzhen: 2014. doi:10.1145/2669062.2669080.

  • Cohen, M, Oyama T. Exocentric Rendering of “Reality Distortion” User Interface to Illustrate Egocentric Reprojection. In: Proc. SUI: ACM Symp. on Spatial User Interaction. Los Angeles: 2015. p. 130. doi:10.1145/2788940.2794357. Poster demonstration.

  • Cooperstock, JR, Fels SS, Buxton W, Smith KC. Reactive environments: Throwing away your keyboard and mouse. Commun Acm. 1997; 40(9):65–73. doi:10.1145/260750.260774.


  • Cohen, M. Dimensions of spatial sound and interface styles of audio augmented reality: Whereware, wearware, & everyware In: Barfield, W, editor. Fundamentals of Wearable Computers and Augmented Reality. Chap. 12. 2nd ed. Mahwah: CRC Press: Lawrence Erlbaum Associates: 2016. p. 277–308. doi:10.1201/b18703-16. ISBN 978-1-4822-4350-5, 978-1-4822-4351-2


  • Cohen, M, Győrbiró N. Personal and portable, plus practically panoramic: Mobile and ambient display and control of virtual worlds. Innov Mag Singap Mag Res Technol Educ. 2008; 8(3):33–5.


  • Dann, WP, Cooper SP, Ericson B. Exploring Wonderland: Java Programming Using Alice and Media Computation. Upper Saddle River: Pearson; 2010.


  • Fernando, ONN, Adachi K, Duminduwardena U, Kawaguchi M, Cohen M. Audio Narrowcasting and Privacy for Multipresent Avatars on Workstations and Mobile Phones. IEICE Trans Inf Syst. 2006; E89-D(1):73–87. doi:10.1093/ietisy/e89-d.1.73. http://search.ieice.org/bin/summary.php?id=e89-d_1_73&category=D&year=2006&lang=E. http://i-scover.ieice.org/iscover/page/ARTICLE_TRAN_E89-D_1_73.


  • Ghosh, A, Trentacoste M, Seetzen H, Heidrich W. Real illumination from virtual environments. In: Proc. Eurographics Symp. on Rendering. Konstanz: 2005. p. 243–52. doi:10.1145/1187112.1187161.

  • Groh, B, Fleckenstein M, Eskofier B. Wearable Trick Classification in Freestyle Snowboarding. In: IEEE EMBS 13th Annual Int. Body Sensor Networks Conf. San Francisco: 2016. p. 89–93. doi:10.1109/BSN.2016.7516238. https://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2016/Groh16-WTC.pdf.

  • He, Z. ERA- Intersection of Time; The Journey of Chinese Acrobats. Shanghai: Shanghai People’s Publishing House; 2009. Translated by Haiming Liu and Hui Ma.


  • Infinite Skill of Poi. DVD video. 2010. http://www.naranja.co.jp/juggling/web-pages/4053.

  • Ishii, H. Tangible bits. IPSJ Magazine. 1998; 39(8):745–51. (In Japanese).


  • Ishii, H, Ullmer B. Tangible bits: Towards seamless interfaces between people, bits and atoms. In: Proc. CHI: Conf. on Computer-Human Interaction. Atlanta: 1997. p. 234–41. doi:10.1145/258549.258715.

  • Kanno, T, Cohen M, Nagashima Y, Hoshino T. Mobile control of multimodal groupware in a distributed virtual environment In: Tachi, S, Hirose M, Nakatsu R, Takemura H, editors. Proc. ICAT: Int. Conf. on Artificial Reality and Telexistence. University of Tokyo: Tokyo: 2001. p. 147–54.


  • Kaplan, J, Yankelovich N. Open Wonderland: an extensible virtual world architecture. IEEE Internet Comput. 2011; 15(5):38–45. doi:10.1109/MIC.2011.76.


  • Kaji, S, Cohen M. HMD-presented virtual reality with personal and social spatial sound In: Mori, K, Yamaki S, editors. Proc. 305th SICE (Society of Instrument and Control Engineers) Tohoku Branch Workshop. Aizu-Wakamatsu: 2016. p. 305–3.

  • Kojima, H, Cohen M. Unity-developed interface for spatial sound conferencing featuring narrowcasting and multipresence with network control In: Mori, K, Yamaki S, editors. Proc. 305th SICE (Society of Instrument and Control Engineers) Tohoku Branch Workshop. Aizu-Wakamatsu: 2016. p. 305–1.

  • Kostakos, V, Ojala T. Public displays invade urban spaces. IEEE Pervasive Comput. 2013; 12(1):8–13. doi:10.1109/MPRV.2013.15.


  • Matsumura, S. Pd Recipe Book— Introduction to Sound Programming with Pure Data (in Japanese). Tokyo: BNN; 2012.


  • Memarovic, N. Understanding future challenges for networked public display systems in community settings. In: Proc. 7th Int. Conf. on Communities and Technologies. Limerick: ACM: 2015. p. 39–48. doi:10.1145/2768545.2768559. ISBN 978-1-4503-3460-0


  • Milgram, P, Colquhoun Jr H. A Taxonomy of Real and Virtual World Display Integration In: Ohta, Y, Tamura H, editors. Mixed Reality: Merging Real and Virtual Worlds. Chap. 1. Secaucus, Berlin, Heidelberg: Springer-Verlag: 1999. p. 5–30.


  • Milgram, P, Kishino F. A taxonomy of mixed reality visual displays. IEICE Trans Inf Syst. 1994; E77-D(12):1321–9.


  • Olsen, VS. Alice 3 Cookbook. Birmingham: Packt Publishing Ltd; 2011.


  • Pedersen, ER, Sokoler T, Nelson L. PaperButtons: expanding a tangible user interface. In: DIS: Proc. 3rd Conf. on Designing Interactive Systems. New York: ACM: 2000. p. 216–23. doi:10.1145/347642.347723.


  • Pulkki, V. Virtual source positioning using vector base amplitude panning. J Audio Eng Soc. 1997; 45(6):456–66.


  • Pulkki, V. Generic panning tools for Max/MSP. In: ICMC: Proc. Int. Computer Music Conf. Munich: 2000. p. 304–7. Preprint 4463 (I6).

  • Pulkki, V, Lokki T, Rocchesso D. Spatial effects In: Zölzer, U, editor. DAFX: Digital Audio Effects. Chap. 5. 2nd ed. Wiley: 2011. p. 139–84.

  • Ranaweera, R, Cohen M. Gestural Interface for Conducting Virtual Concerts. IEEJ Trans. Electron Inf Syst (C). 2016; 136(11):1567–73. doi:10.1541/ieejeiss.136.1567.


  • Rekimoto, J. Organic interaction technologies: From stone to skin. Commun ACM. 2008; 51(6):38–44. doi:10.1145/1349026.1349035.


  • Sasaki, M, Cohen M. Dancing Music: Integrated Midi-Driven Synthesis and Spatialization for Virtual Reality. In: AES: Audio Engineering Society Conv. San Francisco: 2004. Preprint 6316 (R-3).

  • Silva, JM, El-Saddik A. Exertion interfaces for computer videogames using smartphones as input controllers. Multimedia Sys. 2013; 19(3):289–302. doi:10.1007/s00530-012-0268-y.


  • Sra, M, Schmandt C. Expanding social mobile games beyond the device screen. Pers Ubiquit Comput. 2015; 19(3-4):495–508. doi:10.1007/s00779-015-0845-0.


  • Steenblik, RA. Chromastereoscopy In: McAllister, DF, editor. Stereo Computer Graphics and Other True 3D Technologies. Princeton: Princeton University Press: 1993. p. 183–95.


  • Streitz, N, Kameas A, Mavrommati I. The Disappearing Computer: Interaction Design, System Infrastructures and Applications for Smart Environments. LNCS 4500 (State-of-the-Art Survey). Berlin: Springer; 2007. doi:10.1007/978-3-540-72727-9.


  • Tsukida, N, Ryskeldiev B, Cohen M. “Lights, Camera, Action!”: ambient lighting extending photospherical display. In: Proc. VRCAI: Int. Conf. on Virtual Reality Continuum and Its Applications in Industry. Kobe: 2015.

  • van Dam, A. Visualization research problems in next-generation educational software. IEEE Comput Graphics Appl. 2005; 25(5):88–92. doi:10.1109/MCG.2005.118.


  • van den Hoven, E, van de Garde-Perik E, Offermans S, van Boerdonk K, Lenssen K-MH. Moving tangible interaction systems to the next level. Computer. 2013; 46(8):70–6. doi:10.1109/MC.2012.360.


  • Wham-O. Frisbee® Freestyle: Jam Like a Pro™. DVD video. 2010.

  • Weiser, M, Brown JS. Designing calm technology. PowerGrid J. 1996; 1.01.

  • Weiser, M, Brown JS. The coming age of calm technology In: Denning, PJ, Metcalfe RM, editors. Beyond Calculation: The Next Fifty Years. Chap. 6. New York: Copernicus (Springer-Verlag): 1997. p. 75–85. http://www.ubiq.com/hypertext/weiser/acmfuture2endnote.htm.


Acknowledgements

This project has been supported in part by a grant from the Social Sciences and Humanities Research Council of Canada.

Author information


Contributions

MC was the architect of this project and the academic advisor of the student contributors. RR contributed to the CVE development, including its iOS interface. BR helped develop the iOS and Android mobile applications. TO developed the Alice 3 mixed virtuality scenes. AH helped develop the padiddle and poi rigs. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Michael Cohen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Cohen, M., Ranaweera, R., Ryskeldiev, B. et al. “Twhirleds”: Spun and whirled affordances controlling multimodal mobile-ambient environments with reality distortion and synchronized lighting to preserve intuitive alignment. Sci Phone Appl Mob Devices 3, 5 (2017). https://doi.org/10.1186/s41070-017-0017-x


Keywords