Spatial Memetics: Defining Communicative Forms for Intelligent Spaces

Spatial Memetics is a new area of research under development at the Spatial Sound Institute, Budapest. The concept was premiered at an exploratory spatial listening workshop at Me Convention 2018 in Stockholm.

As we edge closer to an era of seamlessly integrated, immersive media, our built environments begin to take on a new presence: that of “intelligent” space.

An emerging set of sensory media — spatial sound, haptic feedback, mixed reality, and bio-reactive wearables, among others — enables radical new possibilities for interaction between a room and the people within it. An exciting new field beckons: defining the syntactic and semantic contours of an emerging cultural continuum. What new types of exchange and awareness might open up in such an environment? Can we expand our scope of sensing and sensitivity, of meaningful experience, beyond the bounds of current linguistic forms?

Spatial Memetics enables participants to intuitively express complex and nuanced ideas using spatial media: spatial sound, mixed reality, and haptics. It proposes a set of syntactic symbols, or runas, as building blocks for a ‘living’ (as in non-static) language integrating sonic, visual, and haptic media. The ultimate aim is to establish a trans-sensory language set (sounds have shapes, shapes have vibrational patterns, vibrational patterns have sounds) that evolves and morphs, via principles of semantic association outlined in the research, according to the thoughts, emotions, and movements of the participant.
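To make the trans-sensory mapping concrete, here is a minimal sketch of how a single runa might be modeled, assuming a hypothetical Runa structure that binds sonic, visual, and haptic renderings together and morphs all three from one participant signal. Every name here (Runa, pitch_hz, morph, arousal) is illustrative, not part of the project's own vocabulary.

```python
from dataclasses import dataclass

@dataclass
class Runa:
    """One trans-sensory symbol: a sound, a shape, and a vibration, bound together."""
    name: str
    pitch_hz: float                      # sonic rendering: a base frequency
    shape: list[tuple[float, float]]     # visual rendering: 2D outline points
    haptic_pattern: list[float]          # haptic rendering: pulse durations in seconds

    def morph(self, arousal: float) -> "Runa":
        """Return a variant of this runa scaled by a participant signal in [0, 1].

        `arousal` stands in for whatever movement, emotional, or biometric
        data the environment actually senses.
        """
        factor = 1.0 + arousal
        return Runa(
            name=self.name,
            pitch_hz=self.pitch_hz * factor,                          # pitch rises
            shape=[(x * factor, y * factor) for x, y in self.shape],  # shape grows
            haptic_pattern=[d / factor for d in self.haptic_pattern], # pulses quicken
        )

# Usage: a triangle-like runa that intensifies across all three modalities at once.
base = Runa("spark", pitch_hz=220.0,
            shape=[(0.0, 1.0), (-1.0, -1.0), (1.0, -1.0)],
            haptic_pattern=[0.2, 0.1, 0.2])
excited = base.morph(arousal=0.8)
print(excited.pitch_hz)  # 396.0: the same symbol, rendered more intensely
```

Binding all three renderings to one object is what keeps the modalities synchronized; a real system would replace the single scalar input with the principles of semantic association the research outlines.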

The project's intention is to open up a new paradigm of communication in multimedia environments: one that not only enables complex expressions currently unavailable, but also triggers states of sensing and feeling otherwise inaccessible in waking-state consciousness, with greater capacity for creative association and empathy as two outcomes.

Spatial Memetics was conceived through collaborative exchange between Berlin-based artists John Connell and Noah Pred. The project's initial output will follow in late 2019.