crossmodal equivalences [2020]
Crossmodal Equivalences is both an installation artwork and a research tool. The work was exhibited at the Experimental Media and Performing Arts Center (EMPAC) on the Rensselaer Polytechnic Institute campus, January 13-17, 2020. It consists of a single compositional structure broadcast over a radio network to thirteen user stations, each of which interprets the structure and maps it to the body through correlated light, sound, and vibration. Each station offers a distinct configuration of custom-built stimulus fixtures, and therefore a distinct blend of stimulation techniques. For this exhibition, a user-experience survey was administered to participants in order to determine the relative effects of each configuration on specific dimensions of conscious experience, such as alterations to self-awareness, the sense of embodiment, the perceived passage of time, and the dissolution of the boundaries between the senses. For more information about this experiment, please download the dissertation: Crossmodal Equivalences: Multisensory Composition and the Induction of Non-Ordinary Experience.
bundles of attributes [2017]
This generative work explores and exploits the principles by which the mind groups sensory stimuli into perceptual wholes. Known as the Gestalt principles of organization, these laws determine how a diverse array of sensory input will be sorted in our conscious experience. Applied to sonic material, these principles predict that distinct acoustic events presented with spatial or temporal proximity, similar spectral content, similar spatial trajectory, or a shared spectral or amplitude envelope will be perceived as emanating from the same object. In this work, several rapid streams of related sonic fragments are played simultaneously around a 4-channel speaker array. The software continuously sorts these streams by certain attributes (timbre, duration, pan location, etc.) and, using various probability distributions, alters the values of those attributes. What results is an endlessly shifting sea of chance correlations between sonic streams. In synchrony with the audio, visual stimulation in the form of flickering light is presented to closed eyelids. This correlation, too, is frequently altered, and, again as predicted by the Gestalt principles, chance cross-modal bindings between light and sound cause the observer's perception to momentarily fuse the sources together. Through this sustained perceptual engagement, the piece attempts to guide the observer into a state of wordless cognition.
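The stream-sorting logic described above can be reduced to a short sketch. The following Python fragment is purely illustrative (the installation runs on its own custom software, not this code); the attribute names and palettes are hypothetical stand-ins chosen to show how periodic, probabilistic reassignment of attributes produces momentary chance correlations between streams.

```python
import random

ATTRIBUTES = ("timbre", "duration", "pan")

def make_stream(rng):
    # Each stream holds one value per attribute, drawn from a small,
    # hypothetical palette of timbres, durations, and pan positions.
    return {"timbre": rng.randrange(4),
            "duration": rng.choice([0.1, 0.2, 0.4]),
            "pan": rng.uniform(0.0, 1.0)}

def perturb(stream, rng):
    # Alter one randomly chosen attribute, mimicking the continuous
    # probabilistic re-sorting described above.
    attr = rng.choice(ATTRIBUTES)
    if attr == "timbre":
        stream["timbre"] = rng.randrange(4)
    elif attr == "duration":
        stream["duration"] = rng.choice([0.1, 0.2, 0.4])
    else:
        stream["pan"] = rng.uniform(0.0, 1.0)

def chance_correlations(streams):
    # Count discrete attribute values momentarily shared by two or more
    # streams -- the "chance correlations" a listener may bind together.
    shared = 0
    for attr in ("timbre", "duration"):
        values = [s[attr] for s in streams]
        shared += sum(1 for v in set(values) if values.count(v) >= 2)
    return shared

rng = random.Random(1)
streams = [make_stream(rng) for _ in range(4)]
for step in range(8):
    for s in streams:
        perturb(s, rng)
    print(step, chance_correlations(streams))
```

Because the perturbations are independent per stream, the correlation count drifts up and down over time rather than settling, which is the behavior the piece exploits.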
This piece was composed for mmpc, a multisensory environment created in collaboration with Jeremy Stewart. Designed to accommodate various configurations of audio, visual, and tactile hardware, this device serves both as an immersive environment for the production of multi-modal artworks and as a means of studying the phenomenological effects of combining various stimulation techniques across the senses. For this work, mmpc is configured with a 4-channel speaker setup and subwoofer, a tactile transducer mounted to a chair, and a custom-built LED array panel positioned in front of the user's face. This panel is intended to be viewed through closed eyelids and presents uniform color fields and flickering lights, techniques demonstrated to induce geometric pseudo-hallucinations and other pronounced effects in most subjects.
neither breaking nor expounding [2014]
Submitted as the practical portion of my master's thesis, neither breaking nor expounding is a multi-sensory installation incorporating visual, auditory, and tactile stimulation techniques intended to allow for non-conceptual modes of experience. The visual stimulation is presented to closed eyes through a custom-built headset with embedded RGB LEDs and consists of both ganzfeld techniques (delivered via an array of uniformly colored peripheral light channels surrounding the eyes) and photic stimulation in the form of independently flickering red, green, and blue channels positioned directly in front of each eye. Tactile stimulation is delivered via small vibration motors strapped to the forearms and wrists, which provide both continuous and pulsed vibrations. Auditory stimulation is presented through headphones and consists of electronic drones which are decorrelated using both a convolution/allpass filter method and short delays. All aspects of this work are controlled using custom software built in Max, with an Arduino interfacing with the lights and motors.
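The short-delay decorrelation mentioned above can be illustrated in miniature. This is a hedged Python sketch, not the piece's actual implementation (which is built in Max); it assumes a mono drone as input and a single fixed inter-channel delay of a few milliseconds, which lowers the correlation between the two ears and can shift a sound from internalized to externalized.

```python
import math

SR = 44100  # sample rate in Hz (assumed)

def mono_drone(n, freq=110.0):
    # A plain sine drone standing in for the piece's electronic drones.
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def decorrelate_by_delay(mono, delay_ms=5.0):
    # Left channel passes through; right channel is delayed by delay_ms.
    # A few milliseconds is enough to reduce inter-channel correlation
    # without being heard as a discrete echo.
    d = int(SR * delay_ms / 1000.0)
    left = list(mono)
    right = [0.0] * d + mono[:-d] if d else list(mono)
    return left, right

left, right = decorrelate_by_delay(mono_drone(4410))
```

The actual work combines this with a convolution/allpass method, which decorrelates by scrambling phase rather than by a single delay; the delay version is simply the easiest to show.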
The formal structure of this work begins as a multi-modal ganzfeld session, offering a uniform color field, continuous motor vibrations, and a steady, low-frequency drone. Over the first half of the work, flicker is introduced into the color field, with pulses in the motors at a sub-harmonic of the flicker frequency. Simultaneously, a low-pass audio filter is swept upwards, revealing higher-frequency content associated with the drone. This is done gradually, to slowly pull the subject into the experience. As the piece progresses, subtle changes occur in the high-frequency decorrelation, the color scheme of the color field, and the pulse rate of the motors and flicker. Abrupt changes also occur in the decorrelation of the low-frequency content and in the rate, waveform, and duty cycle of the photic and tactile stimulation. The shifts in audio decorrelation (through internalization and externalization of sound) and vibration (through the reduction of tactile sensation due to sensory adaptation) are intended to aid in dissolving the perceived distinction between the subject's body and the environment. The mapping of parameters across the different modes of stimulation is intended to force a cross-modal binding, potentially allowing for a visceral, non-conceptual experience similar to synesthesia. The source material for each mode of stimulation is intentionally abstract and minimal, to reduce the burden on the subject of bracketing out reference to source or association. The visual stimulation techniques employed have been demonstrated in both clinical and artistic settings to induce pseudo-hallucinations, an altered sense of time, and changes in proprioception. All of these techniques are combined in this work with the intention of producing a synergistic effect which may allow the subject to slip into altered perceptual states.
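The two control relationships described above, motor pulses locked to a sub-harmonic of the flicker, and a low-pass cutoff that rises over the first half of the piece, can be sketched as simple functions. This is an illustrative Python reduction with assumed numbers (the real control logic lives in Max, and the specific rates and cutoff range here are hypothetical).

```python
def subharmonic(flicker_hz, divisor=4):
    # Motor pulse rate locked to a sub-harmonic of the visual flicker,
    # e.g. 8 Hz flicker -> 2 Hz motor pulses (divisor is an assumption).
    return flicker_hz / divisor

def cutoff_sweep(t, duration, start_hz=80.0, end_hz=8000.0):
    # Exponential low-pass sweep completing over the first half of the
    # piece; the cutoff then holds at end_hz. Start/end values assumed.
    half = duration / 2.0
    p = min(t / half, 1.0)  # sweep progress, clamped at 1
    return start_hz * (end_hz / start_hz) ** p

# At the midpoint of a 9-minute (540 s) version, the sweep is complete.
print(subharmonic(8.0), cutoff_sweep(270.0, 540.0))
```

An exponential (rather than linear) sweep is used in the sketch because equal time then covers equal pitch intervals, which is how a gradually opening filter is typically perceived.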
For a more detailed description of this work, the written thesis, Multi-Modal Ganzfeld and Non-Discursive Aesthetic Experience, is available here.
Notes: This 9-minute version is a proof-of-concept rather than a full piece. The video presented here serves merely as documentation of the work and does not reflect the subjective effects of experiencing it firsthand. Because this piece relies on binaural cues, headphone listening is recommended.