REAL-TIME INTERACTIVE GRAPHICS

Vol.32 No.3 August 1998
ACM SIGGRAPH



Intelligent Virtual Worlds Continue to Develop



Glen Fraser
SOFTIMAGE Inc.
Scott S. Fisher
Telepresence Research, Inc.



We’ve always imagined that the computer could become a fantastic tool for building real-time virtual worlds. Early examples were compelling until we realized that, no matter how many polygons and textures were included, there needed to be more to it than just endlessly navigating around a static 3D space at 30fps. Next step? Add in AI, Artificial Life, Avatars...

The vision of combining these capabilities in various permutations to create evolving, interactive, computer-generated virtual worlds — and populating them — is slowly coming into focus. Invested with new dimensions of symbolism and meaning as these worlds begin to extend their presence throughout our digital network, they’re gradually approaching the kind of fantastic animated cybernetic systems described by science fiction writer Orson Scott Card in his Ender Wiggin series: “intelligent” virtual worlds that respond to the user and persist through a lifetime.

A critical challenge in the elaboration of these intelligent virtual worlds will be the development of environments that are as unpredictable and rich in interconnected processes as an actual location. And, in addition, their virtual inhabitants will need to be smart enough to learn about the user and to evolve accordingly. As always, these developments have been pioneered through the disparate efforts of many, including Karl Sims, Pattie Maes, Joe Bates’ Woggles World, Naoko Tosa’s Neurobaby, and of course, Christa Sommerer and Laurent Mignonneau’s interactive environments (to name just a few).

In this column, we take a glimpse at three unique intelligent virtual worlds. We’re not trying to present a comprehensive “past, present and future” of the area; rather than focusing on a single project, we want to present a broad range of developments and give some sense of the diversity, richness and exciting possibilities offered by interactive artificial worlds.

First, we present Menagerie, a project from Telepresence Research that was an early effort to develop an interactive virtual world that responded to the user’s activities in real time. Originally commissioned by the Centre Georges Pompidou in 1992, Menagerie was also the first immersive virtual environment experience to be exhibited in a major public art museum.

Second, we present Rebecca Allen’s virtual environment framework, Emergence. This system generates an active, responsive, networked virtual world that provides the basis for an artistic installation entitled The Bush Soul, to be shown as part of the SIGGRAPH 98 Interactive Art — Emerging Technologies exhibition.

Gravity is a 25-person, San Francisco-based software company with roots in virtual reality that develops breakthrough 3D technology and applications. Their Terra software offers the user a beautiful, organic, dreamlike world to explore. In our final section, Zak Zaidman — one of Gravity’s founders — presents this fascinating new project.

For additional perspectives on these exciting new developments, the 13th Biennial European Conference on Artificial Intelligence is sponsoring a special Workshop on Intelligent Virtual Environments in Brighton, U.K., this summer. For more information, see their website.

And finally, we are always looking for interesting projects and ideas to present in future columns. If you are involved in an interesting real-time interactive project that you think might be suitable for our column, please contact one (or both) of us.

— Glen Fraser, Scott S. Fisher

Menagerie: Prototype for a Responsive Virtual Experience

Scott S. Fisher
Telepresence Research, Inc.


Project Website

Figures 1 and 2: Viewpoint from Menagerie, by Telepresence Research, Inc., 1993.

Menagerie is an immersive virtual world inhabited by virtual characters and presences specially designed to respond to and interact with its users. It was designed to investigate the potential of an interactive environment that could provide a more compelling experience of presence and immersion for its users than previously developed virtual spaces. At the time, these environments typically displayed static 3D architectural models that could be navigated in real time but that offered little else once an expectant user had explored them, usually within a few minutes. Our premise was that a user’s sense of presence in a real or virtual space is participatory — and can be increased through complete and interactive immersion in an environment that is highly responsive, rather than indifferent, to the user.

The Virtual Experience

Inspired by the award-winning computer-generated animation, Eurythmy, by computer artists/scientists Susan Amkraut and Michael Girard, Menagerie was a collaborative attempt with them to make an experience similar in feeling to Eurythmy, but with the addition of stereoscopic 3D graphics and the capability to dynamically change and respond to the user in real time. The experience allows a visitor to become visually and aurally immersed in a sparse, monochromatic 3D computer-generated environment that is inhabited by many virtual animals. The animals enter and exit the space through portholes and doors that materialize and dematerialize around the viewer.

As a user explores the virtual space, they encounter several species of computer-generated animals, birds and insects that move about independently, and interactively respond to the user’s presence in various ways. For example, if the user moves toward a group of birds gathered on the ground, the birds might take off and swirl around the user with realistic flocking behavior, fly off into the distance and return to the ground in another location. Several four-legged animals will approach the user with different gaits and behavioral reactions. The visitor might also turn toward the 3D localized sound of other animals as they follow from behind. See Figures 1 and 2.

There are more than 12 different basic scenarios of animal groupings that the user could encounter while navigating through the endless Menagerie world. And, since the character motions are calculated in real time, their activities and interactions are never repeated — the world is different each time the user interacts with it. In terms of experience quality, Menagerie succeeds in giving a user the unique feeling that this simple world has a life of its own and that their actions are distinctly affecting the outcome of events happening around them. Of course, sense of presence is a difficult, and maybe impossible, dimension of experience to quantify in an actual or a virtual space. But our experience with Menagerie suggests that a user’s immersion and sense of presence is directly proportional to, and enhanced by, the world’s responsiveness.

Menagerie was first installed in the Galeries Contemporaines of the Centre Georges Pompidou in Paris as part of the exhibition “Real-Virtual” by Scott Fisher in January 1993. This was part of a larger, ongoing exhibit to present new developments in art and technology at the Pompidou called “Revue Virtuelle” and was curated by Martine Moinot, Christine van Assche and Jean-Louis Boissier. More recently, in August 1993, this experience was installed in the Tomorrow’s Realities Gallery exhibit at SIGGRAPH 93 in Anaheim, CA.

Display Hardware and System Configuration

The visual interface to this virtual world is a display device called the BOOM (Binocular Omni-Orientational Monitor), a head-coupled, rather than head-mounted, display. The BOOM is a counterbalanced CRT-based stereoscopic viewing device which enables interactive, real-time viewpoint control in a 3D environment generated by computer or camera. Using this device is very similar to using a pair of binoculars that provide a movable, wide-angle window into the virtual space. It incorporates very wide field-of-view optics and two independent CRT displays packaged together as an integrated viewing head with user hand grips and buttons for simple viewpoint manipulation.

The sounds in Menagerie are generated by a very high-speed DSP system (Crystal River Engineering’s Beachtron) capable of presenting real-time spatialized 3D sound cues over headphones. When integrated with the head-coupled viewer, the sound sources can be stabilized in fixed locations or in motion trajectories relative to the user and in synch with other virtual objects and characters in the environment. For example, in Menagerie, the sounds designed by composer Mark Trayle are associated with each animal as they move around the virtual space. Also, additional localized sounds are displayed in correspondence with the opening and closing of virtual doors through which the animals and birds travel. If a door opens behind the user, she can hear its sound behind her and turn to watch this new scenario unfold.
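The core of that stabilization is re-expressing each source’s world position in head coordinates every frame, using the pose reported by the head-coupled display. Below is a simplified C++ sketch of the idea (yaw only, with types and names of our own invention; the Beachtron’s actual interface is not shown):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Re-express a world-fixed sound source in head coordinates, given the
    // head position and yaw (radians) reported by the tracker. Pitch and
    // roll are omitted for brevity; a full solution uses the whole pose.
    Vec3 worldToHead(Vec3 source, Vec3 head, float headYaw) {
        Vec3 d = { source.x - head.x, source.y - head.y, source.z - head.z };
        float c = std::cos(headYaw), s = std::sin(headYaw);
        // Undo the head's rotation about the vertical (y) axis, so the
        // source stays put in the world as the listener turns.
        return Vec3{ c * d.x - s * d.z, d.y, s * d.x + c * d.z };
    }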

This virtual environment experience is presented by means of a real-time computer graphics system. In this case, the role of the computer (or computers) in generating these worlds is to coordinate and enable several interlinked processes. First, it receives data from the various real-time peripheral devices that sense the user’s actions and inputs to the system. Then, it processes this data and develops an appropriate response to the user’s inputs or actions. In this installation, the response is in the form of image and sound information provided in turn by specialized image rendering hardware and sound synthesis devices. The images are generated at a rate of 30 to 60 times per second for each eye in order to create and maintain a seamless and complex 3D image surround in response to the user’s actions. The original installation presented a wireframe representation of the virtual world and its inhabitants generated by a Personal IRIS workstation from SGI. Later implementations provided a polygon-based environment with simple texture mapping, enabled by a more powerful SGI Crimson RealityEngine system.
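As a rough illustration of this sense-process-respond cycle, here is a minimal C++ sketch of such a loop. Every name in it is a hypothetical placeholder, not the actual Menagerie code:

    struct Pose { float pos[3]; float yaw, pitch, roll; };  // from the BOOM sensors

    Pose readTracker() { return Pose{}; }        // sample head position/orientation
    void updateWorld(const Pose&) {}             // advance animal behaviors, doors, sounds
    void renderEye(const Pose&, int /*eye*/) {}  // draw one half of the stereo pair

    int main() {
        for (;;) {                  // loop runs 30 to 60 times per second
            Pose head = readTracker();
            updateWorld(head);      // world responds to the user's actions
            renderEye(head, 0);     // left eye
            renderEye(head, 1);     // right eye
        }
    }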

Graphics and Interaction Software

The main Menagerie simulation software and world design was developed by Michael Girard and Susan Amkraut of Unreal Pictures. This experience is one of the first to explore the development of interactive characters in a virtual environment. Following is a description of their objectives and algorithms for developing the project:

“We know how an animal moves, not just what it looks like. In our computer simulations of ‘virtual’ animals, the geometric representations are deliberately designed to be simple in order to emphasize the motion of the animals, rather than the details of their appearance. For us, the essential expression is in the abstraction of the motion, and what it suggests to the imagination of the viewer. The motion of the animals is modeled with computer programs that simulate the physical qualities of movement. Many of the techniques employed are inspired by the robotics field. Legged animals respond to simulated gravity as they walk and run in various gaits. They are able to spontaneously plan footholds on the ground so that they appear to be dynamically balanced. Birds and other flying creatures accelerate when flapping their wings and bank realistically into turns. Flocking and herding algorithms direct the patterns of flow for large groups of animals. All animals maintain a degree of autonomy as they adaptively alter their motion in response to their surroundings, avoiding collisions with both other animals and the virtual environment user. Animals may follow general goals, such as ‘walk along any path from door X to door Y’ or ‘fly toward region Z and land on any unoccupied spot on the ground.’ However, their precise movements are unpredictable since they depend on the constantly shifting circumstances of interaction between each of the animals and the user.”
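The flocking and user-avoidance behavior described above can be approximated with a classic boids-style update, in which each animal steers from purely local information. The C++ sketch below is a generic illustration under our own assumptions; the weights and ranges are not Girard and Amkraut’s actual algorithm:

    #include <cmath>
    #include <vector>

    struct V3 { float x = 0, y = 0, z = 0; };
    V3 operator+(V3 a, V3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    V3 operator-(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    V3 operator*(V3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    float len(V3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

    struct Bird { V3 pos, vel; };

    // One step of a boids-style flock: each bird steers by local rules only
    // (cohesion toward neighbors, separation from crowding, alignment of
    // headings) plus avoidance of the viewer, so group motion emerges
    // without any central script.
    void flockStep(std::vector<Bird>& flock, V3 user, float dt) {
        for (Bird& b : flock) {
            V3 cohesion, separation, alignment;
            int n = 0;
            for (const Bird& o : flock) {
                float d = len(o.pos - b.pos);
                if (&o == &b || d > 10.f) continue;          // only nearby birds
                cohesion  = cohesion + (o.pos - b.pos);      // drift toward the group
                alignment = alignment + o.vel;               // match headings
                if (d < 2.f) separation = separation - (o.pos - b.pos);
                ++n;
            }
            V3 steer = cohesion * 0.01f + separation * 0.1f;
            if (n) steer = steer + alignment * (0.05f / n);
            V3 away = b.pos - user;                          // avoid the viewer
            if (len(away) < 5.f) steer = steer + away * 0.2f;
            b.vel = b.vel + steer * dt;
            b.pos = b.pos + b.vel * dt;
        }
    }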

The Menagerie project was produced by Scott S. Fisher, Telepresence Research, Inc., San Francisco, CA, in collaboration with Magic Box Productions, Inc., Beverly Hills, CA.

Emergence and The Bush Soul

Rebecca Allen
Chair, Department of Design, UCLA


Project Website

Figure 3: The Bush Soul: The Inner World.

Figure 4: The Bush Soul: The Outer World.

Emergence

Emergence is a PC-based software system that supports an active, responsive, networked, virtual world. This system was designed by Rebecca Allen and a team of UCLA computer science and design students for the creation of interactive art.

The Emergence system includes a unique 3D engine to handle the rendering and display of virtual worlds and a physics-based behavior system that enables complex behaviors and interactions between all objects in the environment. People who enter this world are represented as avatars. Avatars, as well as computer-generated environments and autonomous animated characters, are rendered in real time as three dimensional, texture-mapped objects.

Through a behavior scripting language, one can utilize techniques of artificial life to specify behaviors and relationships between characters and objects. Sounds, such as voice, music or ambient effects, are linked to objects and characters in the environment to enhance the sense of life and space. The Emergence system allows for the creation of multiparticipant virtual worlds that are “alive,” responsive and interactive. Complex social environments can evolve from the interaction of simple behaviors.

Emergence supports multiple camera viewpoints. Each inhabitant, including the avatar, can be viewed from a first or third person perspective. A camera is treated as another character and can be directed to follow the avatar. This third person perspective allows one to observe the avatar while attached to it.
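One simple way to realize a camera-as-character is to give it a goal point behind the avatar and ease toward it each frame. The sketch below illustrates the idea under our own assumptions; the names and constants are not the Emergence API:

    #include <cmath>

    struct V3 { float x, y, z; };
    struct Camera { V3 pos, target; };

    // The camera behaves like a character whose goal is a point a few
    // units behind and above the avatar; easing toward that goal each
    // frame gives the detached, third-person feel described above.
    void followAvatar(Camera& cam, V3 avatarPos, V3 avatarForward, float dt) {
        V3 desired = { avatarPos.x - avatarForward.x * 4.f,
                       avatarPos.y + 1.5f,
                       avatarPos.z - avatarForward.z * 4.f };
        float k = 1.f - std::exp(-3.f * dt);   // frame-rate independent easing
        cam.pos.x += (desired.x - cam.pos.x) * k;
        cam.pos.y += (desired.y - cam.pos.y) * k;
        cam.pos.z += (desired.z - cam.pos.z) * k;
        cam.target = avatarPos;                // always look at the avatar
    }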

We have developed a client/server model that enables the networking of environments. The server handles changes to the world geometry and movement of objects, sending updates of the world model to the clients. Clients render user-dependent views into the world and receive user input.
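As a sketch of what such a protocol might carry, the server can broadcast per-tick deltas while clients send back input. The message layout below is our assumption, not the actual Emergence wire format:

    #include <cstdint>
    #include <vector>

    struct ObjectUpdate {                   // one object that changed this tick
        std::uint32_t id;
        float pos[3];                       // new position
        float rot[4];                       // new orientation (quaternion)
    };

    struct WorldUpdate {                    // server -> every client
        std::uint32_t tick;
        std::vector<ObjectUpdate> changed;  // deltas only, not the whole world
    };

    struct UserInput {                      // client -> server
        std::uint32_t avatarId;
        float joystick[2];                  // the participant's control axes
    };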

Our primary focus, however, is on the design of movement as it relates to the animation of avatars and other forms of artificial life, as this is most important for the impression of life and interaction within the world. Therefore, one of the major requirements of the Emergence system is to handle hierarchical motion data from articulated 3D models, including motion-captured animation.
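Handling hierarchical motion typically means treating an articulated model as a tree of joints whose transforms compose from parent to child. Here is a minimal forward-kinematics sketch with assumed types; it is not Emergence’s actual data structure:

    #include <vector>

    struct Mat4 { float m[16]; };

    // Standard 4x4 matrix product (row-major).
    Mat4 mul(const Mat4& a, const Mat4& b) {
        Mat4 r{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i * 4 + j] += a.m[i * 4 + k] * b.m[k * 4 + j];
        return r;
    }

    struct Joint {
        Mat4 local;                  // this joint's captured pose for the frame
        std::vector<Joint> children;
    };

    // Compose transforms parent-to-child, so moving a shoulder carries the
    // elbow, wrist and fingers along with it. `out` receives one world
    // transform per joint, in traversal order, ready for rendering.
    void pose(const Joint& j, const Mat4& parentWorld, std::vector<Mat4>& out) {
        Mat4 world = mul(parentWorld, j.local);
        out.push_back(world);
        for (const Joint& c : j.children) pose(c, world, out);
    }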

We are creating living, abstract worlds. The designs of characters and other environmental objects are derived from their behaviors and their style of movement. Fluid and “natural” movements take precedence over realistically modeled characters and environments.

Scripting Life

We have also developed a scripting language that allows one to design behaviors and relationships between characters and objects. Using techniques of artificial life, the unique behavior of virtual inhabitants emerges from a set of parameters defined through the scripting language. In the case of the avatar, its behavior is also affected by joystick input from the participant.

The computer-controlled inhabitants of the virtual world are designed with interactivity in mind. Each character in the virtual world has a set of parameters that governs its behavior toward other inhabitants. Using the scripting language to manipulate these parameters, a character can be endowed with “feelings” towards any object in the world. These feelings drive a character’s movements and affect its reaction to any inhabitant in its vicinity.

As an example, a character may like an avatar, follow it and interact with it, but it may also fear another character and will flee when it sees that character approaching the avatar. In addition, it may be part of a group and will occasionally go off to join its group. As in real life, a complex social environment can be achieved from the interaction of relatively simple behaviors.

In order for the artificial life to “see and hear” other inhabitants and maneuver through the space, each agent has vision and sound sensors and a range of individual characteristics that control features such as dynamics. All parameters can be customized for each agent, creating diversity within the world.

Any inhabitant can also be a member of a group. This allows for the development of emergent group behaviors. Emergence occurs when, acting under local rules and local information, a collection of individuals appears to act as a centrally controlled group. In our world, emergent group behaviors include flocking, following and fleeing. Characters belonging to a group still retain their individual feelings towards other inhabitants.

In addition, an inhabitant’s behaviors and feelings can adapt over time. For example, a participant’s avatar can harass a character that likes it by getting in its way, following it closely or acting too aggressively, causing that character to gradually dislike the avatar. Or a character could come to life or change its animation only after an avatar interacts with it in a certain way. As in real life, feelings and resulting behaviors can change based on interactions or due to time-based events.
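To make this concrete, here is one way such parameter-driven feelings might be modeled. The names and constants are invented for illustration and are not the Emergence scripting language:

    #include <algorithm>
    #include <unordered_map>

    struct Feeling { float affinity = 0.f; };     // >0 like (follow), <0 fear (flee)

    struct Character {
        std::unordered_map<int, Feeling> toward;  // keyed by inhabitant id

        // Steering decision when inhabitant `id` is at distance d:
        // positive speed approaches, negative speed flees.
        float desiredSpeedToward(int id, float d) {
            if (d > 15.f) return 0.f;             // outside sensor range
            float a = toward[id].affinity;
            return a >= 0.f ? a : a * 2.f;        // fear flees faster than liking follows
        }

        // Adaptation: repeated harassment erodes liking over time, so a
        // character that once followed the avatar may come to avoid it.
        void experience(int id, bool harassed, float dt) {
            float& a = toward[id].affinity;
            if (harassed) a -= 0.5f * dt;
            a = std::clamp(a, -1.f, 1.f);
        }
    };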

Procedural behaviors applied to articulated 3D models have proved to be a powerful method for invoking the sense of an alive, complex, social and interactive environment.

Virtual Performance

Another strong area of interest is in the exploration of new forms of virtual performance art. As part of this exploration, we have recorded substantial amounts of motion capture data of performance movements and other human motion.

Motion data is applied to various 3D models and is then layered with a virtual performer’s overall set of behaviors, creating complex, somewhat unpredictable performances. Through scripting, a performer can react and interact with the audience and other performers in ways that are not possible in real life.

The Bush Soul

The first work to be created with this system is called The Bush Soul. The Bush Soul is an interactive art work that explores the role of avatars, human presence and social relationships in a world of artificial life. This work is being presented in Orlando, FL, as part of the SIGGRAPH 98 Interactive Art — Emerging Technologies exhibition.

In a virtual world, the avatar becomes our other body. But what part of “us” is in our avatar? In West Africa there is a belief that a person has more than one soul and that there is a type of soul, called the “bush soul,” that dwells within a wild animal of the bush.

An avatar can serve as a place for the bush soul. The avatar carries a person’s bush soul into the “virtual bush” by following the guidance of a human participant. But, like the wild animal, it is also “alive” with its own set of behaviors. This creates a situation in which one does not have absolute control over the behavior of one’s avatar. At times the avatar will make its own decisions, so one must learn to understand the avatar’s behaviors and to occasionally relinquish control.

Through the avatar a person enters a world that encourages exploration, participation and the development of relationships. In this work every object in the environment is instilled with some form of artificial life. Relationships can be formed between all elements.

Activities and events emerge depending on relationships and interactions between avatars and artificial life forms. Such events include experimental performances, narratives and music that are non-linear in structure.

The Bush Soul experiments with forms of communication that rely on symbolic gestures, movements and sounds. With a focus on the “life” of the virtual environment, this work examines the role of artificial life and human presence in an art form that includes the interactive experience.

Software development is partially funded by a grant from Intel Corporation.

Terra: An Organic Virtual World

Zak Zaidman
Gravity, Inc.


Project Website

Figures 5 to 8: Scenes from Terra.

Beautiful, organic and immersive, Terra marks the culmination of a five-year experiment in creating convincing 3D nature simulators for home computers. Terra is also one of a growing number of applications that represent the birth of a genuinely new interactive software category.

What Is It?

Terra’s vision was originally conceived by Gravity in 1994 and has been a creative work in progress ever since. Terra was recently developed for Intel as a showcase application for the 740 3D chip. It features a fantastic, realistic naturescape of unprecedented beauty.

The application starts underwater, where you are represented as a dolphin. In perfect response to mouse or keyboard input, the dolphin turns, swirls and dives as the camera loosely follows and tracks its movements. This “elastic chase” camera creates beautiful cinematic effects as the dolphin swims away from, towards and past your view at different angles and in different directions. Tuning this behavior was paramount to achieving the intuitive yet complex feeling of becoming a dolphin. It takes just a few minutes to learn to navigate, whether you are an experienced or a totally novice user.
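One common way to get this kind of loose, cinematic following is a spring-damper camera. The sketch below is our own reconstruction of the flavor of the technique, not Gravity’s actual code or tuning:

    struct V3 { float x, y, z; };

    // Spring-damper chase: the camera is pulled toward an anchor point
    // behind the dolphin and slowed by damping, so sharp turns let the
    // dolphin swim across and past the view before the camera catches up.
    struct ChaseCam {
        V3 pos{}, vel{};
        float stiffness = 4.f;   // how hard the "elastic" pulls
        float damping   = 3.f;   // how quickly the camera settles

        void update(V3 anchor, float dt) {
            V3 accel = { (anchor.x - pos.x) * stiffness - vel.x * damping,
                         (anchor.y - pos.y) * stiffness - vel.y * damping,
                         (anchor.z - pos.z) * stiffness - vel.z * damping };
            vel = { vel.x + accel.x * dt, vel.y + accel.y * dt, vel.z + accel.z * dt };
            pos = { pos.x + vel.x * dt, pos.y + vel.y * dt, pos.z + vel.z * dt };
        }
    };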

As relaxing music and underwater sound effects fill the background, you notice a host of other creatures swimming about you. Dolphins and certain schools of fish seem to want to hang around you, while other creatures like sea turtles and mantas mind their own business or even turn away antisocially as you approach them. Because this experience is entirely simulated on the fly, nothing is ever the same. Each moment is uniquely created by your presence in a world that responds both to itself and to you. For example, you can follow sea turtles closely enough to see their eyes blinking, but not so closely that you scare them away, or you can swim slowly enough to attract as many friendly companions as possible. You can speed alone through kelp forests and coral reefs, and even swim up to the surface as fast as possible in order to jump out of the water in graceful flips.

Just when you think you’ve seen it all, you head towards the surface at full speed and, without stopping, break through it to see yourself literally transformed into a seagull. Geometry morphs and lighting effects create the impression of a magical transformation in which, over a few seconds, you see your flippers grow out into wings as your snout shrinks back into a beak. Almost instantaneously you are above the water, soaring over a translucent surface through which you can see dolphins and fish swimming in patterns that are both random and still responsive to your movements, even now that you are a seagull. If you get close enough to the water you’ll see a dolphin follow you underwater and then jump right up beside you.
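At its simplest, a geometry morph like this is a per-vertex linear blend between two models with matching topology, driven by a parameter that ramps over a few seconds. A bare-bones sketch follows; the real dolphin-to-seagull effect surely layers lighting and more on top:

    #include <cstddef>
    #include <vector>

    struct V3 { float x, y, z; };

    // Linear per-vertex blend between two models that share a vertex
    // count and ordering; t runs from 0 (dolphin) to 1 (seagull) over
    // the few seconds of the transformation.
    void morph(const std::vector<V3>& dolphin, const std::vector<V3>& gull,
               float t, std::vector<V3>& out) {
        out.resize(dolphin.size());
        for (std::size_t i = 0; i < dolphin.size(); ++i) {
            out[i].x = dolphin[i].x + (gull[i].x - dolphin[i].x) * t;
            out[i].y = dolphin[i].y + (gull[i].y - dolphin[i].y) * t;
            out[i].z = dolphin[i].z + (gull[i].z - dolphin[i].z) * t;
        }
    }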

Other seagulls fly about you. The scenery is a tropical island that is both real and straight out of a dream. Waterfalls create translucent rainbows, colorful palm trees line the hillsides, and a couple of giant pre-Columbian heads jut out from the sandy beaches where flamingos walk about. As you fly past, the flamingos lift off, their wings flapping close to the ground and kicking up swirls of airborne sand. From a high enough altitude you look out onto an open sea that seems to go on forever.

Because Terra was created first and foremost as a proof of concept and showcase, users are given control not only over their movements but also over environmental variables such as time of day, lighting, fog and other rendering effects. As the day progresses and you cause the sun to lower, it paints everything with a hint of pink and orange. At night everything is colored by cool, bluish moonlight. Above water, a full moon and stars, including an occasional shooting star, fill the sky.

A Brief History…

Four years ago we brought a 3D simulator (known as Bug!) to SIGGRAPH 94. The PC running Bug! was a meager Pentium 90 with no special hardware acceleration. Flat-shaded and cartoony, the only cool things about Bug! were its 30 frames per second and the fact that it actually gave users the feeling of being a bug! But encouraged by a week of ooohs and aaahs, we came home filled with optimism, enthusiasm and big plans. Next we’d turn the bug simulator into a full-fledged title, we’d create an immersive underwater world, flight simulators featuring all kinds of birds instead of jet fighters, then countless others.

Bug! became Banzai Bug - The Flight Sim with an Attitude, an action/adventure game that hit the shelves just in time for Christmas 1996. And indeed we did create underwater worlds and eagle flight sims. But until our work with Intel to develop Terra as a showcase application for its 740/AGP technology, each attempt was a frustrating reminder that the images possible on affordable home computers were nothing like those we pictured in our heads.

With 10,000 polygons rendered per frame (over 50,000 in the visible world) and up to 40 megabytes of textures viewable at the same time thanks to the AGP bus, our D3D engine would finally be used to create a truly beautiful and organic world.

What’s Next?

Above all, Terra proves that organic and beautiful virtual experiences are possible on today’s home computers. Hardware and software advances have finally made it possible to do justice to our imagination. We’ve found that Terra captivates users not only with its beauty, but also with the dreamy, almost meditative state it seems to induce in all who venture into it. If a proof of concept can have this effect on novice and expert users alike, there is much reason to believe we will be seeing many more such products in the near future. Continuing our tradition of advancing both technology and content, Gravity will push its vision forward by incorporating its own technological breakthroughs into software products that:

  • Place the user inside realistic computer-generated experiences that simulate different dimensions and levels of reality in unprecedented beauty and detail.
  • Give users experiences that are impossible in the real world but that simulate, relate to and are about the real world.
  • Are as captivating and engaging as popular video games.
  • Allow users unlimited freedom of movement, fostering exploration, contemplation and even learning.

Since Terra, we’ve been working on state-of-the-art scene management technology that will allow us, and the development community at large, to develop 3D content of unprecedented visual complexity, beauty and realism. In a word, we’re going for organic. To Gravity, “organic” means rich, dense, detailed environments that lack the unnatural regularity that reads as synthetic to the human eye. Organic worlds have been problematic for traditional scene management algorithms because they lack certain types of geometric patterns that other so-called “engines” capitalize on. Our technological advances have broken a barrier in this regard, both conceptually and algorithmically.

Nature simulators that feature magical and realistic perspectives of our own Earth will lead the way. Different products will highlight different ecosystems and environments, including tropical rain forests, deserts, and the Arctic and Antarctic. Next will come Inner Worlds. Unlike Terra, which focuses on animals in nature, Inner Worlds miniaturizes the user’s perspective, making it possible to enter and explore the insides of things that are impossible to perceive at human scale. Products will feature navigating freely through the inside of a human body, the inside of a cell, the inside of an atom, etc. Finally, distorting space and time in the opposite direction, other products will allow users to navigate freely through the universe, hopping from planet to planet and moon to moon, through the asteroid belt and Saturn’s rings, around Jupiter and by the Sun, and eventually to other intragalactic and intergalactic wonders.

Scott S. Fisher is a media artist and producer whose work focuses primarily on stereoscopic imaging, immersive first-person display environments and 3D books. Currently, he is Managing Director of Telepresence Research, Inc., a production company focusing on the art and design of virtual environment and remote presence experiences, and Visiting Professor in the School of the Arts and Architecture at UCLA.

Glen D. Fraser is a computer engineer with a passion for virtual reality and other forms of real-time visual computing. He currently works at SOFTIMAGE, developing interactive viewing and animation tools.

The copyright of articles and images printed remains with the author unless otherwise indicated.