
Fi Sullivan's Music Intersects Science, Coding and Pop

Fi Sullivan is a Denver producer and singer who uses data from the natural world to compose electronic pop.
Until a week ago, singer and producer Fi Sullivan wasn’t sure she would be able to make some significant career milestones: a devastating bike accident had left her in the ICU. But she has since been given the green light for a marathon of creative engagements, including three nights performing on an interactive stage at Bonnaroo and a headlining gig tonight, June 24, at Globe Hall in support of her new EP, Shades of Forest.

Sullivan’s compositions sit at the crossroads of high tech and the natural world. A Thomas J. Watson Fellow with a degree in computer science and music, she traveled the world integrating her studies with her creativity. While pop-inspired and accessible, her music is augmented with code that warps, stretches and bends the timbre and underlying mathematics of the planet into tones that are lush, challenging and ephemeral. Leading the compositions is a three-octave vocal range that undulates and pulses with the beats, delivered in ways reminiscent of the chaotic order of the Anthropocene.

Shades of Forest is especially inspired by nature. Sullivan began the EP during her fellowship, in which she researched “human vocal continuity at the intersection of music and technology” by traveling the world to learn how the human voice exists, evolves and extends in different forms across time, technological innovation and cultures. Through the fellowship, she lived in a diverse array of cultures, with stays in Europe, Australia, South America and the Arctic Circle, where she experienced many different kinds of forests: urban, rural, frozen, rock and tropical. When COVID escalated, she was forced to cut her explorations short and return to Colorado. Back home, she spent a lot of time in the coniferous woodlands of the Rockies, thinking about the forests she had wandered through the previous year.

Westword caught up with Sullivan just as she arrived at Bonnaroo to talk about how technology influences her compositions, the human voice and sound in general.

Westword: What was the creative process like for this EP?

Sullivan: All my compositions start with mind mapping and a sound — the imagination and visualization of the sounds I physically and mentally gather — all living, layering and interacting with the natural environment and humans in new worlds, their own worlds. My songs start with a dream scene that appears in my mind, either before or while I’m playing guitar or jamming in Ableton on my computer. I usually find the chorus or main dance moment first, then the vocals and lyrics come intermittently in waves as I try to describe my soundscape and feelings. I usually enter a feverish flow of creative energy while songwriting and producing; it’s hard for me to pull away.

On your new EP, how do you integrate technology into the productions, beyond your DAW (Digital Audio Workstation) and standard plug-ins?

While writing a couple of the songs on the new EP, I was exploring and researching certain algorithms and ideas that then influenced the songs’ characteristics and sounds. The natural environment’s influence also appears organically in my sound and compositions.

“West Water” was initially composed during my evolutionary and analytical art research period, when I was a research assistant in Professor Andrews’s Analytical Arts Lab at Middlebury College. The original “West Water” piece was generative and evolving, unraveling slowly and patiently in an Ableton Live session that lasted ten to twelve minutes. I played it live with a friend improvising on saxophone, and also used “The Cave” vocal patch in Max MSP. The lyrics came later, from a time rafting down Westwater Canyon in Utah, when I was thinking about the evolution of the sandstone towers.

For “Shades of Forest,” I was exploring notch theory and algorithms, as well as algorithmic rave, at the Australian National University in Canberra while researching as a Thomas J. Watson Fellow in 2020. I was there during the tragic bushfires, and had to wear a P2 mask every day, tape my windows and place a wet towel under my door to keep the smoke out. The piece takes influence from the bushfires’ growth and movement, the delay between spikes in the data, and the all-encompassing give and take of fire. “Shades of Forest” has this call-and-response interaction, a delay between its spikes.
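To make that data-to-delay idea concrete, here is a toy Python sketch. It is purely illustrative; Sullivan built her piece in Ableton Live and Max MSP, and the numbers below are placeholders, not bushfire data. It shows one way the gaps between spikes in a growth series could become echo delay times for a call-and-response patch.

```python
# Illustrative only: detect spikes in a growth-data series and turn the
# gaps between them into delay times for a call-and-response effect.

# Placeholder data, e.g., daily burned-area growth in arbitrary units.
growth = [1, 2, 9, 3, 2, 8, 2, 1, 1, 7, 2]

THRESHOLD = 5  # a sample above this value counts as a spike
spike_times = [i for i, v in enumerate(growth) if v > THRESHOLD]

# Gaps between consecutive spikes, measured in samples.
gaps = [later - earlier for earlier, later in zip(spike_times, spike_times[1:])]

# Map each gap to a delay time in seconds; here one sample equals one
# beat at 120 BPM (0.5 s). A delay line driven by these values would
# answer each vocal "call" after a data-derived pause.
delay_times = [gap * 0.5 for gap in gaps]

print(spike_times)  # [2, 5, 9]
print(delay_times)  # [1.5, 2.0]
```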

About your thesis: How does the intersection of technology and the human voice create continuity in music and sound art?

The intersection of technology and the human voice creates continuity in music and sound art because technology allows the human voice to expand into new realms and forms of sound that can be continued and evolved as technology itself continues and evolves. I’m obsessed with sound — the way it so instantaneously appears and disappears with grandiosity or subtlety; the way it is sculpted as this invisible medium to convey intense emotion. It’s magical to me. This is why the intersection of technology and the human voice is so fascinating to me — the idea that this sacred, inherent instrument that all humans hold with them at all times can be layered, delayed, harmonized and transformed into new dimensions of sound is thrilling.

Is there such a thing as vocal continuity, naturally?

Yes! In many forms and senses. There is vocal continuity through vocal traditions and extended vocal techniques such as overtone singing, harmonic singing, throat singing, kulning and [Australian First Nations] songlines. All of these vocal techniques have continued over time, passed through generations as forms of art, but also of survival and play.

What are some of your favorite pieces of music technology, and how do you use them in novel ways?

Max MSP is by far my favorite piece of music technology, because you can dream up anything and figure out a way to create it in the Max MSP environment — it’s so beautiful! I love to use Max MSP for developing generative music, visuals and sound synthesis. I also love to use it for building my own generative digital vocal instruments that I can use in live performance or for production.

My vocal instruments tend to be inspired by the natural world. [I call them] biomorphic digital vocal instruments. A favorite of mine is a patch I created called “The Cave,” which allows me to improvise with my voice in a randomized cave-like soundscape. I love this for live performance, because I will improvise a line and this vocal line will then come back to me ten minutes later, after having been modified through the system, and then I can sing and improvise with myself. I also use Max MSP to create generative instruments. My favorite has been an instrument called “weather pattern” that I use to control sounds in Ableton during live session performances by transforming Denver weather data into sound.
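As a rough illustration of the “weather pattern” idea, the Python sketch below maps placeholder weather readings onto MIDI control-change messages that a DAW such as Ableton Live could assign to sound parameters. This is hypothetical code, not Sullivan’s actual Max MSP patch; the control numbers and value ranges are assumptions.

```python
# Illustrative only: turn weather readings into MIDI control changes
# that a DAW could map to parameters such as filter cutoff or reverb.
import time

import mido  # pip install mido python-rtmidi


def scale(value, lo, hi):
    """Clamp value to [lo, hi] and rescale it to the MIDI 0-127 range."""
    value = max(lo, min(hi, value))
    return int((value - lo) / (hi - lo) * 127)


# Placeholder readings; a real patch would poll live Denver weather data.
weather_stream = [
    {"temp_c": 18.0, "wind_kph": 5.0},
    {"temp_c": 21.5, "wind_kph": 14.0},
    {"temp_c": 19.0, "wind_kph": 32.0},
]

with mido.open_output() as port:  # default MIDI output port
    for reading in weather_stream:
        # Temperature (-10 to 40 C) -> CC 74, often mapped to filter cutoff.
        port.send(mido.Message("control_change", control=74,
                               value=scale(reading["temp_c"], -10, 40)))
        # Wind speed (0 to 60 kph) -> CC 91, often mapped to a reverb send.
        port.send(mido.Message("control_change", control=91,
                               value=scale(reading["wind_kph"], 0, 60)))
        time.sleep(1.0)  # send one reading per second
```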

Is there a particular evolution in technology that you think has yet to be applied properly to music?

I would love to see wearable technology evolve and interact with music more — I’ve always been obsessed with Imogen Heap’s MiMu gloves. It would be amazing to see this type of wearable technology — even the attachment of an accelerometer to a guitar, drumstick or jacket sleeve — become more accessible and seamlessly integrated into live performance, even classic rock concert settings. The thought of your hand movements and gestures being your instruments and sound control is so cool! Wow, it would be incredible!

What are your thoughts on generative music?

I'm excited for so many reasons: generative soundscapes for 4D sound experiences or sound art installations; video game music; generative music for theater or film; generative music as an improvisational environment, like another musician with you on stage. I even saw the coolest research at Ars Electronica in Linz, Austria, using machine learning to communicate with birds.

Do you believe that algorithms might one day take over music composition at the same level that algorithms take over music selection?

I truly believe the computer will never replace the human musician. Sorry, computers, I love you, but music needs the human heart and soul; not to be cheesy, but it’s true. I wouldn’t want musicians to be replaced by generative music out of monetary or operational convenience; that would be tragic. I envision and hope to see generative music as a tool to help human composition and creation. But I believe the human composer will always have an artistic, creative and beautiful edge that algorithms lack, so algorithms won’t take over. I hope they will mainly become tools to help humans compose and create.

Shades of Forest is out on all platforms. Fi Sullivan headlines at Globe Hall, 4483 Logan Street, on Friday, June 24; tickets are $15. 