Event Retrospective: ICMC 2025

The International Computer Music Conference is one of the leading conferences for computer music researchers and composers, and one of our audio programmers had the chance to present his paper during its 50th annual event. Bilkent shared his thoughts on Acoustic Wave Modeling Using 2D Finite-Difference Time-Domain (FDTD), proposing a novel 2D framework that simulates sound propagation as a wave-based model in Unreal Engine.

What is Acoustic Wave Modeling?

Accurate sound propagation simulation is essential for delivering immersive experiences in virtual applications, yet industry methods for acoustic modeling often do not account for the full breadth of acoustic wave phenomena. Motivated by the need for a solution that retains spatial directionality, Bilkent explores how his proposed 2D FDTD solver framework can bring physically grounded acoustics to interactive 3D environments in applications like Unreal Engine.

To better understand what this means for audio programmers, we chatted with him to gain some insight into the research from his paper and how he’s implemented some of these practices into his work.

Q&A

How long did it take to develop the 2D FDTD framework for your paper?

The framework was developed over roughly six months, encompassing the implementation of the 2D finite-difference time-domain (FDTD) solver, boundary conditions, geometry parameterization, and integration with Unreal Engine for visualization and real-time interaction. A significant portion of that time was also dedicated to numerical stability testing, performance optimization, and validating the results against analytical benchmarks. 
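To give a sense of what sits at the core of such a solver, here is a minimal sketch of one time step of a second-order 2D FDTD pressure update. This is not Bilkent's implementation; the grid layout, field names, and the untouched (effectively rigid) outer boundary are simplifying assumptions for illustration only.

```cpp
#include <utility>
#include <vector>

// Minimal 2D FDTD pressure grid: three time levels of the pressure field.
struct Grid2D
{
    int Nx = 0, Ny = 0;
    std::vector<float> Prev, Curr, Next; // pressure at t-1, t, t+1

    Grid2D(int nx, int ny)
        : Nx(nx), Ny(ny),
          Prev(nx * ny, 0.0f), Curr(nx * ny, 0.0f), Next(nx * ny, 0.0f) {}

    float& At(std::vector<float>& f, int i, int j) { return f[j * Nx + i]; }
};

// One second-order update step. lambda2 = (c * dt / h)^2 is the squared
// Courant number; it must stay at or below 0.5 in 2D for numerical stability.
void StepFDTD(Grid2D& g, float lambda2)
{
    for (int j = 1; j < g.Ny - 1; ++j)
    {
        for (int i = 1; i < g.Nx - 1; ++i)
        {
            const float laplacian =
                g.At(g.Curr, i + 1, j) + g.At(g.Curr, i - 1, j) +
                g.At(g.Curr, i, j + 1) + g.At(g.Curr, i, j - 1) -
                4.0f * g.At(g.Curr, i, j);

            g.At(g.Next, i, j) =
                2.0f * g.At(g.Curr, i, j) - g.At(g.Prev, i, j) +
                lambda2 * laplacian;
        }
    }
    // Rotate time levels: t-1 <- t, t <- t+1.
    std::swap(g.Prev, g.Curr);
    std::swap(g.Curr, g.Next);
}
```

The Courant constraint is exactly the kind of stability condition that the testing phase mentioned above has to respect; exceed it and the simulation diverges within a few steps.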

What about your approach provides a nuanced take on dynamic sound rendering? 

Unlike traditional ray-based or pre-baked methods, my approach treats sound as a wave phenomenon, capturing diffraction, interference, and low-frequency pressure interactions in real time. By embedding a numerical wave solver directly within Unreal, it bridges physically grounded acoustics with interactive game environments. This allows sound propagation to change dynamically with the scene geometry and materials, rather than relying on static impulse responses or simplified occlusion models.
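As an illustration of the "changes dynamically with the scene geometry" point, a solver embedded in the engine needs some way to refresh its view of the level whenever geometry moves. The sketch below is a hypothetical stand-in, not the paper's method: IsCellOccluded is a placeholder for whatever engine query (for example, an overlap test against Unreal level geometry) reports whether a grid cell is blocked.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Refresh the solver's occupancy mask whenever scene geometry changes, so wave
// propagation responds to the current layout. IsCellOccluded is a hypothetical
// callback standing in for an engine-side query (e.g. an Unreal overlap test).
void UpdateOcclusionMask(std::vector<std::uint8_t>& solidMask, int Nx, int Ny,
                         const std::function<bool(int, int)>& IsCellOccluded)
{
    for (int j = 0; j < Ny; ++j)
        for (int i = 0; i < Nx; ++i)
            solidMask[j * Nx + i] = IsCellOccluded(i, j) ? 1 : 0;
}
```

During the FDTD step, cells flagged as solid can then be treated as rigid boundaries (for instance by holding their pressure at zero), which is one common simplification for reflective obstacles.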

What are some ways high-fidelity acoustic rendering can be beneficial for games and VR/AR? 

High-fidelity acoustic rendering enhances spatial immersion by aligning auditory cues with the actual physical structure of the environment. For example, players perceive depth and scale more accurately when reflections and diffusion correspond to real geometry. Subtle material-dependent effects (e.g., cloth vs. metal reflections) provide implicit environmental storytelling. 

In VR/AR, it supports embodied presence, where sound behaves consistently with the user’s motion and head rotation, reinforcing spatial coherence.  

Ultimately, it moves audio design closer to environmental simulation rather than artistic approximation. 

How does having accurate low-frequency sound modeling support high-fidelity acoustic rendering? 

Low-frequency waves have longer wavelengths and thus interact with large-scale geometry by bending around corners and filling spaces through diffraction and modal behavior. Traditional geometric acoustics often neglect these effects, leading to unnaturally “thin” or dry environments. 

Accurate low-frequency modeling ensures energy continuity across complex spaces and preserves the acoustic weight and realism of enclosed environments, which is especially important for room modes, explosions, and large environmental ambiences. It also helps maintain physical plausibility in dynamic scenes, where reflections and occlusion evolve continuously. 
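To make the trade-off concrete, here is a small back-of-the-envelope snippet with illustrative numbers (not taken from the paper): an FDTD grid must resolve the shortest wavelength of interest, and its time step is bounded by the 2D Courant (CFL) condition, which is why wave-based solvers are most affordable in exactly the low-frequency range where geometric methods struggle.

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double c   = 343.0; // speed of sound in air, m/s
    const double h   = 0.05;  // grid spacing in metres (assumed example value)
    const double ppw = 10.0;  // points per wavelength commonly used for accuracy

    const double fMax  = c / (ppw * h);            // highest reliably resolved frequency
    const double dtMax = h / (c * std::sqrt(2.0)); // 2D CFL bound on the time step

    std::printf("f_max ~ %.0f Hz, dt <= %.2e s\n", fMax, dtMax);
    return 0;
}
```

With a 5 cm grid, that works out to roughly 700 Hz of usable bandwidth, comfortably covering room modes and low-end rumble, while resolving high frequencies would require a much finer (and far more expensive) grid.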

How have you been able to adopt this process into your own work? 

The process has deeply influenced how I design acoustic systems in interactive media. I’ve started integrating wave-based simulation concepts into game-audio toolchains, such as Unreal-Wwise pipelines, by creating hybrid reflection models that combine FDTD-derived data with runtime spatialization systems. 

It also inspired the foundation of my current smart acoustic point system, where pre-baked regions store acoustic properties learned from simulation, allowing efficient runtime evaluation without full wave solving. 
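In spirit, such a system could look like the sketch below: probe points scattered through the level store acoustic parameters derived offline from simulation, and the runtime simply looks up (or interpolates between) nearby probes instead of solving the wave equation. The struct fields and lookup here are hypothetical placeholders, not the actual system described above.

```cpp
#include <limits>
#include <vector>

// Hypothetical pre-baked acoustic probe: parameters derived offline from an FDTD
// simulation and sampled cheaply at runtime (field names are illustrative only).
struct FAcousticProbe
{
    float X = 0.0f, Y = 0.0f, Z = 0.0f; // probe position in world space
    float ReverbTime  = 0.0f;           // RT60-like decay estimate, in seconds
    float Occlusion   = 0.0f;           // 0 = fully open, 1 = fully occluded
    float LowFreqGain = 1.0f;           // low-frequency energy correction factor
};

// Nearest-probe lookup; a real system would likely interpolate between probes or
// use a spatial acceleration structure instead of a linear scan.
const FAcousticProbe* FindNearestProbe(const std::vector<FAcousticProbe>& Probes,
                                       float X, float Y, float Z)
{
    const FAcousticProbe* Best = nullptr;
    float BestDistSq = std::numeric_limits<float>::max();
    for (const FAcousticProbe& Probe : Probes)
    {
        const float Dx = Probe.X - X, Dy = Probe.Y - Y, Dz = Probe.Z - Z;
        const float DistSq = Dx * Dx + Dy * Dy + Dz * Dz;
        if (DistSq < BestDistSq) { BestDistSq = DistSq; Best = &Probe; }
    }
    return Best;
}
```

The values returned could then drive whatever runtime spatialization layer is in use, for example as game parameters fed into a Wwise or native Unreal reverb setup.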

What is one takeaway you’d like people to have from your paper? 

That physically grounded sound simulation doesn’t have to be prohibitively complex—it can be made interactive, visual, and educational. By merging numerical methods with game engines, we can both advance the realism of virtual acoustics and make the underlying physics intuitively accessible to sound designers, researchers, and players alike. 
