Controlling sound has long captured the imagination in science fiction and fantasy. In *Dune*, the cone of silence lets characters speak privately in public settings. In *Blade Runner 2049*, eerie billboards murmur ads directly into passersby’s ears. In reality, architecture—by design or accident—can shape how sound travels. For instance, in the National Statuary Hall of the U.S. Capitol, a whisper can travel across the room due to how sound waves reflect off curved surfaces.
Today, scientists are exploring how to control sound with precision. The goal? A future where sound can be delivered privately, no earbuds required. But it's no easy task. Human hearing spans 20 to 20,000 hertz, and waves at those frequencies spread out in all directions as they travel, so anyone nearby can listen in. In 2019, researchers tried to steer sound using lasers, which generate sound when their light is absorbed by water vapor in the air. But when the laser beam was stationary, the sound could be heard all along its length. A rotating mirror helped focus the audio, though the sound quality remained poor.
Another strategy uses ultrasound. Though inaudible to humans, ultrasonic waves can help generate audible sounds through a phenomenon known as nonlinear interaction. When two ultrasonic waves intersect, they produce new waves: one at a higher frequency and one at a lower frequency—the difference between the two. This lower-frequency wave can fall within the human hearing range. A similar effect happens when water hits hot oil: the tiny steam explosions release ultrasonic waves that mix in the air, creating the familiar sizzle.
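The difference-tone arithmetic can be checked numerically. The sketch below is purely illustrative (the carrier frequencies and the simple squaring nonlinearity are assumptions for demonstration, not the values or the physics model any research team used): summing two ultrasonic tones and passing them through a quadratic nonlinearity produces a component at the difference frequency, which a single-bin Fourier projection can pick out.

```python
import math

# Assumed, illustrative parameters: two inaudible ultrasonic carriers
fs = 200_000              # sample rate, Hz
f1, f2 = 40_000, 41_500   # carrier frequencies, Hz (difference: 1,500 Hz, audible)
dur = 0.02                # seconds of signal
n = int(fs * dur)

def pressure(t):
    """Toy model: air's nonlinearity approximated by squaring the summed beams."""
    s = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    return s * s  # the cross term 2*sin(a)*sin(b) = cos(a-b) - cos(a+b)

samples = [pressure(i / fs) for i in range(n)]

def amplitude_at(f):
    """Amplitude of the component at frequency f (single-bin DFT projection)."""
    c = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(samples))
    return 2 * math.hypot(c, s) / n

print(f"amplitude at 1.5 kHz (difference tone): {amplitude_at(f2 - f1):.3f}")
print(f"amplitude at 1.0 kHz (control):         {amplitude_at(1_000):.3f}")
```

The mixed signal carries a strong component at 1,500 hertz, the difference between the two carriers, while an unrelated audible frequency shows essentially nothing.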
In the 20th century, the U.S. military harnessed this principle to build directional speakers. Companies like Holosonics later commercialized the tech, allowing sound to be projected along narrow paths. But like laser-based methods, these devices don’t provide true privacy—sound is still audible along the beam’s path.
Recently, however, researchers have made a breakthrough: private “audible enclaves.” “It’s like wearing an invisible headset,” says Yun Jing, an acoustics expert at Penn State. His team described the technology in a March report in the *Proceedings of the National Academy of Sciences*. In these experiments, a person standing in one specific spot can hear sound clearly, while someone just a step away hears nothing at all.
The secret is acoustic metasurfaces—engineered materials with tiny repeating structures that control sound in ways natural materials can’t. “A metasurface acts like a lens that’s thinner than the wavelength of the sound it controls,” explains Michael Haberman, a mechanical engineer at the University of Texas at Austin. Just as optical lenses bend and focus light, acoustic metasurfaces can shape and steer sound waves.
Jing’s team used 3D printing to create panels with zigzag air channels. By adjusting the length of each channel, they could curve ultrasonic waves along specific paths. They then placed thin sheets of this metasurface over two speakers to bend their ultrasonic beams toward one another. Where the beams met, nonlinear interactions converted them into audible sound—audible only at that precise location.
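The idea behind the channel lengths mirrors classic phased-array steering: a longer path delays the wave's phase, and a steady phase increase across the panel tilts the outgoing wavefront. The numbers below (carrier frequency, channel spacing, steering angle) are assumptions chosen for illustration, not the team's actual design.

```python
import math

# Assumed, illustrative values
c = 343.0                  # speed of sound in air, m/s
f = 40_000.0               # ultrasonic carrier frequency, Hz
wavelength = c / f         # about 8.6 mm

pitch = 0.004              # spacing between adjacent channels, m
theta = math.radians(15)   # desired deflection angle of the beam

# To tilt the wavefront by theta, each successive channel must delay the
# wave a little more; a longer zigzag path supplies that extra delay.
extra_path = pitch * math.sin(theta)                 # added length per channel, m
phase_step = 2 * math.pi * extra_path / wavelength   # phase increment, rad

print(f"extra channel length per element: {extra_path * 1e3:.2f} mm")
print(f"phase step between channels: {math.degrees(phase_step):.1f} degrees")
```

With these assumed values, each channel needs only about a millimeter of extra path to bend a 40-kilohertz beam by 15 degrees, which hints at why sub-wavelength 3D-printed structures can do the job.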
“The sound quality isn’t great; we used a $4 transducer,” Jing admits. “But this is just proof of concept. And it works.”
While we haven’t reached the level of Dune’s cone of silence, this technology hints at a future where private conversations could take place in open settings—no headphones or wires required. Libraries, offices, and public spaces might one day be filled with these “audible enclaves,” each delivering a unique audio stream to just one person at a time.