Process: Recreating Sleep Paralysis

Media: Oculus Rift, Neurosky Mindwave, Inflatable Suit

Software: Unity, OpenFrameworks, Arduino

The Virtual Bedroom

With that in mind, the virtual space is structured so the user begins the simulation as an avatar lying in bed, viewing the room from that avatar's position; the only available movement is looking around the room. The room is designed to feel claustrophobic, with no windows and only a bed and lights, as shown below.


The next component I tackled was bringing my narrative into the environment in alignment with the subject of the simulation. My earlier prototypes had left off with sleep demons hovering over the sleeper.


I aimed for a layout more authentic to my experience, stemming from the Vertical Slice prototype of a sleep demon sitting on top of the sleeper. This layout was inspired by some of my own experiences and by Henry Fuseli's painting "The Nightmare." Together with my personal experience as the original reference, the Vertical Slice prototype framed the positioning of the demon and the goal of the sleeper's interaction with it.



In this prototype, I moved the camera to a third-person perspective to show more of the interaction, and used simple polygons as stand-ins to focus on the mechanics of what would happen in the scene. The sleeper figure below is my user, and a crumpled demon is the entity causing paralysis.





In this prototype the demon spawns in a different position on top of the user every time the simulation is played. Only its shadow is visible until the user's mouse hovers over it. In Fig. 13, only the shadow of the demon can be seen hovering over the sleeper.


Fig. 13: Shadow of Demon

When the user moves the mouse over the demon, the demon appears, as shown in Fig. 14 below. The shadow gives the user a clue that something is there, and the user must use that clue to find the demon.


Fig. 14: Mouse-over Demon Revealed

The interaction can be examined in Fig. 15.

Fig. 15

Once the user identifies the object with the mouse-over, he/she can click and drag it off to free the sleeper. This prototype solidified the placement of the demon and the interaction of giving feedback when the user looks at an object.

The Vertical Slice prototype, case studies, and my references of sleep demons encouraged me to change the placement of the sleep demon to communicate a reason for the sleeper's immobility and chest pressure: the demon now holds the sleeper down. Matching most of my own experiences with sleep paralysis as well as case-study examples, the final setup is shown in Fig. 16, where a demon holds down a sleeper to convey the idea to users with or without sleep paralysis experience.


Fig. 16

Refining Focus

Earlier in the project I aimed to create an analog meter to give the user visual feedback on his/her attention level. Originally I planned to map both the attention and meditation readings, but found that this could confuse the user, so I concentrated on translating just the attention data set. Using a Point of View light mapped to the user's head rotation, the size and intensity of the spotlight were mapped to the user's attention levels. The light source originated from the user's head, guiding him/her through the virtual room. In Fig. 17 the spotlight has a smaller diameter and lower intensity, compared to Fig. 18 where the user's attention levels are higher.


Fig. 17


Fig. 18

The user's first feedback tool, the Point of View Spotlight originating from the user's gaze, was a key component in setting the mood of the simulation, so I added a blur effect to the camera to help create consistency with the user's convoluted view of a dreamlike virtual world. The blur effect is controlled by the user's attention level: if the user's concentration is low, the camera view is blurred, as shown in Fig. 19.


Fig. 19

If the user's attention is high, however, the camera view is clear, as in Fig. 20, encouraging the user to concentrate in order to see the environment.


Fig. 20

Since the spotlight dims and the view blurs whenever concentration drops, the design encourages the user to keep his/her attention levels high.
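The Neurosky headset reports attention as a value from 0 to 100, which makes both of these feedback channels simple linear mappings. A minimal sketch of the idea, separate from the Unity implementation; the ranges and function names here are illustrative, not the project's actual tuned values:

```python
def spotlight_params(attention, min_radius=1.0, max_radius=6.0,
                     min_intensity=0.2, max_intensity=1.0):
    """Map a 0-100 attention reading to spotlight radius and intensity.

    Low attention -> small, dim spotlight; high attention -> wide, bright.
    """
    t = max(0.0, min(float(attention), 100.0)) / 100.0
    radius = min_radius + t * (max_radius - min_radius)
    intensity = min_intensity + t * (max_intensity - min_intensity)
    return radius, intensity


def blur_amount(attention, max_blur=1.0):
    """Blur is the inverse of attention: low focus means a heavily blurred view."""
    t = max(0.0, min(float(attention), 100.0)) / 100.0
    return max_blur * (1.0 - t)
```

Driving both the spotlight and the blur from the same clamped attention value keeps the two feedback channels consistent with each other.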

The simulation begins with an introduction scene that allows the user to become accustomed to the Point of View Spotlight meter. The scene lets the user look around the static environment and concentrate on the spotlight without any distraction from objects in the environment. Audio from the Instructional Therapist plays at the start of the scene, guiding the user to concentrate in order to make the Point of View Spotlight grow and clear the blur. The Instructional Therapist's voice and video are also used in the installation prompt to maintain consistency. The prompt waits for the user to complete a concentration threshold cycle before allowing him/her to proceed to the next level of turning on the lights. During this scene the user can only hear the sound of his/her concentration, which changes frequency based on the attention level to give the user feedback on how hard he/she is concentrating.
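A sketch of how such a threshold-cycle gate and the audio feedback could work, assuming attention arrives as a 0-100 reading once per tick; the threshold, tick count, and frequency range here are illustrative guesses, not the project's tuned values:

```python
class ConcentrationGate:
    """Unlocks the next scene once attention stays above a threshold
    for a run of consecutive readings (one 'threshold cycle')."""

    def __init__(self, threshold=60, required_ticks=3):
        self.threshold = threshold
        self.required_ticks = required_ticks
        self.ticks = 0
        self.passed = False

    def update(self, attention):
        if self.passed:
            return True
        # A low reading resets the streak; the user must hold focus.
        self.ticks = self.ticks + 1 if attention >= self.threshold else 0
        if self.ticks >= self.required_ticks:
            self.passed = True
        return self.passed


def feedback_pitch(attention, base_hz=220.0, max_hz=880.0):
    """Higher attention -> higher tone, so the user hears his/her focus."""
    t = max(0, min(attention, 100)) / 100.0
    return base_hz + t * (max_hz - base_hz)
```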

Stop, Look, Concentrate!

Evolving the mouse-over reveal from my Vertical Slice prototype, I translated that concept to ray-casting from the sleeper's perspective: when the user is not looking at the demon, it is invisible (Fig. 21), but when the user looks at it, it becomes visible (Fig. 22).


Fig. 21


Fig. 22
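In Unity this would typically be a `Physics.Raycast` from the head-tracked camera, toggling the demon's renderer when the ray hits its collider. A language-agnostic approximation of the same test, using a narrow view cone in place of a physics ray; the cone angle is an assumed value:

```python
import math


def looking_at(gaze_dir, eye_pos, target_pos, cone_deg=10.0):
    """Approximate a forward raycast with a narrow view cone: the target
    counts as 'looked at' when the angle between the gaze direction and
    the eye->target direction is within cone_deg."""
    to_target = [t - e for t, e in zip(target_pos, eye_pos)]
    norm = math.sqrt(sum(c * c for c in to_target)) or 1.0
    to_target = [c / norm for c in to_target]
    gnorm = math.sqrt(sum(c * c for c in gaze_dir)) or 1.0
    gaze = [c / gnorm for c in gaze_dir]
    dot = sum(a * b for a, b in zip(gaze, to_target))
    # Clamp before acos to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot)))) <= cone_deg
```

The demon's visibility flag is then just `looking_at(head_forward, head_position, demon_position)` evaluated every frame.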

The concept of the viewer looking at an object and receiving affirmative feedback translated into building a look-and-concentrate narrative with the lights.


Turn On The Lights!

Teaching the user to concentrate became my next concern; my previous prototypes inserted the user into the virtual environment too quickly, without leading the user through how to interact with it. The solution was to add lights to the simulated room that the user could focus on, acclimating the user to the look-and-concentrate mechanic that serves as the simulation's main control. The next task was adding cues for the user to look at each light in sequential order, so I gave each light a flicker that persists until the user turns it on by looking and focusing. The flicker begins with the lantern on the user's right, then the standing spotlight on the user's left, as shown in Fig. 23 and Fig. 24.


Fig. 23


Fig. 24

When the user looks at the light and concentrates, the sleep demon's head appears on the opposite side of the user's viewpoint and screams at the user, because the lights weaken it, as shown in Fig. 25 and 26.


Fig. 25

The sleep demon's head is programmed to smoothly follow the user's perspective at the opposite edge of the frame, creating a leering glare at the sleeper.


Fig. 26

Once the user turns on each light, the flickering stops and the sleep demon's head disappears. However, by turning on the lantern, the user reveals the hungry ghost hovering above, as in Fig. 27.


Fig. 27

Turning on the standing spotlight reveals the sleep demon holding the sleeper down; its shadow is seen in Fig. 28, and it sits before the sleeper in Fig. 29.


Fig. 28


Fig. 29

Now the user must use the looking at and concentrating mechanic to defeat the sleep demon.

Too Many Demons

Former prototypes had sleep demons procedurally generate during the simulation, as in Fig. 30, to create a claustrophobic atmosphere.


Fig. 30

This idea translated into the hungry ghost demon, which would procedurally generate during play. However, I found that the hungry ghost was not upholding its duty to cramp the room and act as a distracting demon, since it was placed behind the sleep demon near the ceiling, too far out of range for the user to interact with. The hungry ghost was added as a cultural reference to Chinese mythology about the underworld of Eastern ghosts and ghost oppression, a common framing of sleep paralysis in East Asian culture. Being ethnically part Chinese, this folklore shaped my view of sleep paralysis and the superstitious connotations it carried. The hungry ghost would grow every time the user's attention hit a high threshold and shrink at a low threshold. During the scene, four more hungry ghosts would instantiate in the room to conjure the claustrophobic feel of a room filling with ghosts; I used four because that number is associated with death in Chinese culture. Even so, the hungry ghost proved ineffective within my narrative, overshadowed by the shadow demon and the size of the room, so I removed it from the simulation.

Destroy The Demon

The concepts I carried over from last semester for destroying the demon were looking at it, head growth, and a particle explosion. Figs. 31 and 32 show earlier visions of the shadow demon's head growth.


Fig. 31


Fig. 32

Originally the demon would explode into particles, as in Fig. 33, if the user looked at it and the demon’s head grew to a certain threshold.


Fig. 33

In the final outcome, after both lights are turned on, the demon becomes fully visible to the viewer. I refocused killing the demon to mirror the interactivity of turning on the lights: while the demon is visible, its appearance flickers like the lights, and it emits a flickering demon sound that encourages the user to look at it to stop the negative feedback. I then gave the demon a heart that glows based on the user's attention level, as in Figs. 34 and 35 below.


Fig. 34


Fig. 35

The user can only see the heart glow by looking at the sleep demon and trying to raise his/her attention levels; however, when this happens the sleep demon's head grows to distract the user, and the demon screams, as in Fig. 36.


Fig. 36

The growing head draws the user's gaze toward the demon's head instead of its heart, distracting the user from the key to destroying the demon. To destroy the demon, the user must concentrate while looking at its heart: the glow builds with the user's attention level, and once the user reaches a concentration threshold, he/she must keep looking until the heart is fully lit, which triggers the demon's destruction. The metaphor of facing one's fears under distress is communicated by the user holding his/her concentration despite all the negative feedback thrown at him/her.
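A sketch of this heart-charging loop, assuming per-tick updates; the charge rate, decay rate, and threshold are illustrative assumptions, and the actual build may ramp the glow differently:

```python
class DemonHeart:
    """Heart glow charges while the user looks at the heart with high
    attention, and decays otherwise. A fully lit heart destroys the demon."""

    def __init__(self, threshold=60, charge_rate=0.2, decay_rate=0.1):
        self.threshold = threshold
        self.charge_rate = charge_rate
        self.decay_rate = decay_rate
        self.glow = 0.0  # 0.0 (dark) .. 1.0 (fully lit)
        self.destroyed = False

    def update(self, looking_at_heart, attention):
        if self.destroyed:
            return True
        if looking_at_heart and attention >= self.threshold:
            self.glow = min(1.0, self.glow + self.charge_rate)
        else:
            # Looking away (e.g. at the growing head) lets the glow fade.
            self.glow = max(0.0, self.glow - self.decay_rate)
        if self.glow >= 1.0:
            self.destroyed = True
        return self.destroyed
```

The decay on look-away is what makes the head-growth distraction meaningful: glancing at the head costs the user progress toward the explosion.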


When the demon is destroyed, particles are released, as in Fig. 37, to mimic an explosion, and then the user's virtual body sits up.


Fig. 37

The room's lights become fully lit to let the user know the sleep paralysis state is over, as in Fig. 38.


Fig. 38

At this point the simulation ends, and audio instructions are given for exiting the simulation. The previous semester's demo video in Fig. 39 shows how far the project has since evolved.

Fig. 39

The final second-semester demo video teaches the user the look-and-concentrate mechanic in the Oculus headset, and can be reviewed in Fig. 40.

Fig. 40

On Experiential Road

Continuing into the exhibition show with Anamorphic Agency, I redirected toward the experiential side of the project to give the user a physical sensation while in the virtual reality simulation. I began to refine how to translate the aspect of sleep paralysis where the sleeper feels pressure on his/her body during the event, a feeling all too familiar from my own experiences, and researched ways to communicate this sensation. Last semester I had steered toward using a weighted blanket, or weighted vests and weighted bands for the wrists and ankles, as in Fig. 41, an earlier floor plan.


However, gradually increasing or decreasing the weight of the vest and bands posed a problem once the user was already wearing them. A mechanism to pull the weighted items off the user was one solution, but building such a device was overly complicated.

Therefore I began exploring other avenues for adding pressure and immobility, and came across inflatable suits, usually used for sumo wrestler costumes. I remembered the air pressure and immobilization I had felt in such a suit. A feeling of immobility produced by a physical object that could be controlled with a fan motor made the inflatable suit a perfect candidate for the experiential goals of Anamorphic Agency, as shown in Fig. 42.


Fig. 42

Since a small motor controls the fan of the suit, I hacked the motor, as in Fig. 43, so I could control its power with an Arduino.


Fig. 43

Reusing the technology from a previous project, where I translated attention data from the brainwave sensor into the power of an air pump, I implemented that system in the inflatable suit. Essentially, the suit inflates when the user is not concentrating and deflates when the user is concentrating. This supports negative feedback for not focusing, immobilizing the user with the pressure and mass of the suit, and positive feedback for focusing, deflating the suit to release pressure and restore mobility, as in Fig. 44.

Fig. 44
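The pump mapping is the inverse of the spotlight's: low attention means more pump power. A sketch, assuming attention in 0-100 and an 8-bit PWM duty cycle on the motor driver; the `running` flag reflects the need to cut power when the simulation is not active:

```python
def pump_power(attention, running=True, max_pwm=255):
    """Inverse mapping: low attention inflates the suit (more pump power),
    high attention deflates it. The motor is off when the simulation ends."""
    if not running:
        return 0
    t = max(0, min(attention, 100)) / 100.0
    return int(round(max_pwm * (1.0 - t)))
```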

A shortcoming of my inflatable suit, however, was that the serial data from the brainwave sensor was sent directly from the openFrameworks receiving program, so the motor did not turn off when the virtual simulation ended. This concern will need to be addressed when I build for the exhibition.


Crafting For Conferences

Evaluating what was learned and improved from conference feedback also contributed to my final demo, and mainly shaped my exhibition installation. IndieCade taught me to streamline my system so demoing would be easier, and to add more narrative and feedback so the user reacts to each element in the environment. Fitting users with the Oculus headset and explaining how the technology worked as I put on the Neurosky sensor became more fluid with practice. Different Games attendees, as seen in Fig. 45, gave me the idea of adding sound tied to the user's attention levels, so that the user has feedback when attention increases or decreases. The negative light-flicker feedback tested at Different Games led me to use it to guide the user toward looking at the objects that stop the flickering. I was also steered toward using exterior speakers for the installation, because spikes in loud noises were deafening during play.


Fig. 45

At the PlayTech playtest event I found users needed more guidance on what to do when the simulation began, which pushed me toward adding audio to guide the user. Fig. 46 shows the suit at PlayTech, but I decided not to use the inflatable suit in the exhibition due to the difficulty of putting it on. However, the same experiential elements will be translated into the exhibition.


Fig. 46


Unity Integration

Knowing the inflatable suit's shut-off issue would carry over to the Powertail Switch setup, I opted to purchase Uniduino to integrate Unity-to-Arduino communication, so that the motor turns off when the simulation ends, keeping the simulation consistent. I translated my original Arduino sketch to receive Uniduino communication, which uses Standard Firmata. It worked well once I figured it out: I could disable the script that runs the physical component when the demon is destroyed in the simulation, keeping the physical and digital components of the project consistent.

Moving the Arduino component to Unity prompted me to import the brainwave data receiver into Unity as well, installing a plugin for the Neurosky brainwave sensor. However, with the Unity project running so many other components, the plugin worked but made the project overly glitchy. With this unwelcome outcome, I decided to keep the data import on the openFrameworks side and run it simultaneously with the Unity build. This setup is not ideal, but it keeps the data stream into the Unity environment smooth, so the user's experience is not broken by technical overload.
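Running the two programs side by side implies a small local bridge: openFrameworks forwards each attention reading, and Unity polls for it every frame. A sketch of one common approach, using UDP datagrams on localhost; the port and plain-text encoding are assumptions for illustration, not the project's actual protocol (the original may use serial or OSC):

```python
import socket

PORT = 12000  # illustrative port, not the project's actual value


def send_attention(sock, attention, addr=("127.0.0.1", PORT)):
    """openFrameworks side: fire-and-forget one reading per datagram."""
    sock.sendto(str(attention).encode("ascii"), addr)


def recv_attention(sock):
    """Unity side: read one datagram and parse the attention value."""
    data, _ = sock.recvfrom(64)
    return int(data.decode("ascii"))
```

Because UDP is connectionless, either program can be restarted without breaking the other, which suits a long-running installation.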


Exhibition Woes

Although the inflatable suit was a great fit for demoing the experiential aspect of the project, it proved challenging to implement in the exhibition show. Creating a space for the user to come in, put on the suit, and then don the rest of the head equipment was a taxing request, as shown in Fig. 47.


Fig. 47

Keeping a backup solution in mind for the exhibition setup, I decided to build a sleeping-bag enclosure embedded with an inflatable mattress pad to invoke the same technology, as shown in Fig. 48.


Fig. 48

However, since this setup's 120V air pump requires more voltage, I used a Powertail Switch II connected to an Arduino. The air pump plugs into the switch, and an LED embedded in the switch controls power to the pump. The Arduino controls the brightness of that LED, and data from the user's attention levels is ported into the LED's brightness, as in Fig. 49.


Fig. 49
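If the switch's control input is effectively binary (a relay that is either on or off) rather than truly dimmable, one robust way to drive it from a noisy attention stream is hysteresis: switch the pump on below a low threshold and off above a high one, so borderline readings don't make the relay chatter. The thresholds here are illustrative assumptions:

```python
class PumpHysteresis:
    """Turns the switched pump on when attention falls below `low` and
    off when it rises above `high`; readings in between hold the state."""

    def __init__(self, low=40, high=60):
        self.low, self.high = low, high
        self.on = False

    def update(self, attention):
        if attention < self.low:
            self.on = True       # low focus: inflate the mattress
        elif attention > self.high:
            self.on = False      # high focus: let it deflate
        return self.on
```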

The back-up installation plan in Fig. 50 demonstrates the setup for a user walking into the space. The exhibition setup will require a maximum footprint of 5 feet by 8 feet. A reclined therapy chair will sit in the space with a helmet enclosure attached; the Neurosky sensor and Oculus will be enclosed in the helmet, so the user can insert his/her head in a single motion. Speakers will be embedded in the head area of the chair so the sound won't deafen the user. An inflatable mattress embedded in the sleeping bag will inflate and deflate based on attention levels, similar to the inflatable suit.


Fig. 50

An instruction demo, as in Fig. 51, will play when the user walks into the space and triggers the motion sensor. This plays a video on the TV in the background that gives some context to Anamorphic Agency as a fictional test conducted with recordings of my sleep paralysis experiences.

Fig. 51

The narrator in Fig. 52 is my fictional sleep therapist conducting the test. The video guides the user through getting into the sleep chamber and exiting after the simulation has ended.


Fig. 52


Demo Day Redemption

The inflatable suit will be used for demo day, however, because the translation of data and the experiential feel are more fluid in the suit. The back TV screen will show a live feed of the user's perspective. The layout will be similar to the exhibition, but I will be there to administer the suit, making it feasible, as in Fig. 53.


Fig. 53

Last Save

I came to the realization that creating a standalone piece for my exhibition was simply not feasible: with virtual reality technology still new to the public, a smooth flow of testers would require a person to demo and explain the equipment. The instability of the heavy-duty movable desk supporting the 20-pound demoing computer also contributed to the final decision. Therefore I decided to post demo times for users to test the simulation with me present to help with putting on the equipment and running the demo. This choice allowed smoother transitions between users, and let me troubleshoot equipment issues while the simulation was running.

Since the project installation was up for an entire week, I also programmed a demo mode, running automatically every fifteen minutes, that simulates how the sleeping-bag chair inflates and deflates during the simulation. A demo video plays alongside it to show how inflation and deflation are controlled by the user's attention levels.
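The fifteen-minute cycle can be driven by a simple monotonic-clock check, polled once per frame; a sketch, with the class name and frame-tick structure as illustrative assumptions:

```python
import time

DEMO_INTERVAL = 15 * 60  # seconds between automatic demo cycles


class DemoScheduler:
    """Triggers the inflate/deflate demo cycle every fifteen minutes.

    The clock is injectable so the logic can be tested without waiting.
    """

    def __init__(self, interval=DEMO_INTERVAL, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last_run = clock()

    def tick(self):
        """Call once per frame; returns True when a demo cycle should start."""
        now = self.clock()
        if now - self.last_run >= self.interval:
            self.last_run = now
            return True
        return False
```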


The final exhibition setup is below:


Exhibition Video: Coming soon!


Realizing Anamorphic Agency from a standalone demo into an experiential installation transformed it throughout the final semester. Focusing on the narrative and the experiential elements helped the project communicate a transparent message to users through the interaction mechanics of the piece. Missteps during the first semester led to better outcomes toward the end, and pausing to recharge after the first semester allowed for reflection that pushed the final piece. Complications with installation were a major setback for the exhibition, but the piece will shine through in the end with better ways to instrument a space for Anamorphic Agency.


Anamorphic Agency succeeds when it leaves users with the impression of entering a disempowered virtual space and exiting with empowerment gained within that space. The inspiration of sleep paralysis was a catalyst for a structured narrative for the user to interact in. More elements of sleep paralysis could be added to the virtual simulation and the experiential installation, but Anamorphic Agency concludes by opening a space for ideas about designing translation and control within virtual realities. Using negative or traumatic events to enter a constricted design path, and designing those narratives with transparent interactions, will lead to elevated approaches in virtual simulation design. Mastering design for those restricted, structured narratives sets a scaffold for designing the unstructured, openly participatory narratives where I initially began with Anamorphic Agency.


Special thanks to my previous instructors Kan Yang Li, Nick Fortugno, Chris Prentice, Barbara Morris, Colleen Macklin, Joe Savaadra, and Robert Yang for helping me develop my ideas. Also a special thanks to all my classmates who helped with the code and/or concept for this project: Pierce Wolcott, Chris Ray, and Gabriel Gianordoli. And thanks to my 3D modelers, Kamille Rodriguez for the characters and Aaren Grace for the furniture assets.

Shown at Different Games 2015:

Shown at Parson’s MFADT Thesis Exhibition 2015:


Latest Show Posts at:
