Can Ray Tracing be used to improve audio in games too?



So far we have talked about Ray Tracing applied to graphics, but not to audio. In this article we will explain what it consists of, what differences and similarities it has with its graphics counterpart, and what it can contribute to the video games we play on our PCs.

One of the most heavily marketed audio technologies is so-called three-dimensional sound, which positions the sounds produced in a scene according to the position of the viewer or player, but always with a physical speaker layout in mind.

The idea behind Ray Tracing applied to audio is very simple: use the ray-tracing algorithm to trace not the path of light photons but the path of the sound waves in an environment and the way in which they reach the listener, all far more precisely and with fewer constraints than the speaker systems used in cinemas.

Audio ray tracing: sound waves instead of photons


Ray Tracing Audio follows the same principles as its graphics variant, but instead of representing the journey of photons through a scene, the same algorithm is used to represent the journey of sound waves through an environment.

In other words, we represent how sound travels through three-dimensional space using the same model that ray tracing uses for light:
  • As with light, we can assign an amount of energy to each sound ray; the difference is that, in this case, the energy is lost with distance.
  • The ray generation shader that spawns rays for reflections and shadows can also be assigned to generate audio rays.
  • As with graphics rays, audio rays bounce and behave in one way or another depending on the nature of the objects they interact with.
The last point is important, because when a sound wave leaves one medium to enter another, such as when a sound bounces off a wall, the sound wave changes. A portion of it will try to pass through the medium while the rest is bounced back by the object, and how the energy is split varies with the type of material it interacts with, as the sketch below illustrates.
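As a rough illustration, here is a minimal C++ sketch of that energy split. The AcousticMaterial struct and all coefficient values are hypothetical, chosen only to show the idea:

    #include <cstdio>

    // Hypothetical per-material acoustic properties: fractions of the
    // incident energy that are absorbed inside the material or pass
    // through it. Whatever remains is reflected back into the room.
    struct AcousticMaterial {
        float absorption;   // energy lost as heat inside the material
        float transmission; // energy that passes through to the other side
        // reflection = 1 - absorption - transmission
    };

    // Split the energy of an incoming sound ray when it hits a surface.
    // Returns the reflected energy; the transmitted portion is written out
    // so a second ray can continue on the far side of the wall.
    float splitEnergy(float incoming, const AcousticMaterial& m, float& transmitted) {
        transmitted = incoming * m.transmission;
        return incoming * (1.0f - m.absorption - m.transmission);
    }

    int main() {
        // Illustrative values only: a concrete wall reflects most sound,
        // while a thin wooden door lets a good part of it through.
        AcousticMaterial concrete{0.02f, 0.01f};
        AcousticMaterial woodDoor{0.10f, 0.30f};

        float through = 0.0f;
        float reflected = splitEnergy(1.0f, woodDoor, through);
        std::printf("wood door: reflected %.2f, transmitted %.2f\n", reflected, through);
        reflected = splitEnergy(1.0f, concrete, through);
        std::printf("concrete:  reflected %.2f, transmitted %.2f\n", reflected, through);
    }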

Not all sound waves are important


What interests us in the case of Ray Tracing Audio is only what the player can actually hear in the scene: any ray that is too far away or never reaches the player's area of influence is discarded.

There's no point in moving amidst a cacophony of sound that would drive the player insane and completely destroy the immersion of the game.


The best way is for the GPU to simply calculate the path from the original source to the player's position in the scene. Contrary to its graphics variant, a bounce does not generate new waves to be rendered throughout the scene; instead, each bounce redistributes the ray's energy, which translates into the intensity with which we hear the sound. If that intensity is too low, the ray is discarded, as in the sketch below.
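As a minimal sketch of that culling step, assuming a simple inverse-square falloff and an arbitrary hearing threshold (both values are illustrative, not from any engine):

    #include <cmath>
    #include <cstdio>

    // Minimal sketch: ray intensity falls off with distance via the
    // inverse-square law, and rays below a hearing threshold are culled.
    constexpr float kPi = 3.14159265f;
    constexpr float kHearingThreshold = 1e-4f; // cull anything quieter than this

    float intensityAtDistance(float sourcePower, float distance) {
        // Energy spreads over the surface of a sphere of radius `distance`.
        return sourcePower / (4.0f * kPi * distance * distance);
    }

    int main() {
        const float power = 1.0f; // hypothetical source power
        const float dists[] = {1.0f, 10.0f, 50.0f};
        for (float d : dists) {
            float i = intensityAtDistance(power, d);
            std::printf("%5.1f m -> intensity %.6f (%s)\n", d, i,
                        i >= kHearingThreshold ? "kept" : "discarded");
        }
    }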

In ray tracing for graphics we have to take into account the entire trajectory of the photon and how it "paints" the scene it travels through by combining its color with the existing ones. In ray tracing for audio we only have to take into account the end of the sound path and whether it affects the player or not.

This implies that when an audio ray passes through a part of the scene far away from the player, no audio is generated there; the ray simply continues, and only the sound that arrives in an area close to the player is processed.

GPUs for Ray Tracing Audio


First of all, we are going to need a GPU to calculate the path of the sound waves, so the technique directly affects the performance of the graphics card: the same specialized hardware that calculates the intersection of rays with objects would be reused for a new type of ray, one which does not manipulate the color values of an object or group of objects but rather manipulates the audio of the scene.
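To make the idea concrete, here is a CPU-side C++ sketch of what such an audio trace loop might look like. The toy one-plane scene, the reflectivity value, and the crude proximity test all stand in for what the GPU's intersection hardware and a real engine would provide:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    struct Hit {
        Vec3  point;        // where the ray struck a surface
        Vec3  bounceDir;    // direction of the reflected ray
        float reflectivity; // fraction of energy kept after the bounce
        bool  valid;        // false if the ray left the scene
    };

    // Toy "scene": a single reflective floor plane at y = 0, standing in
    // for the acceleration structure the GPU hardware would traverse.
    Hit intersectScene(Vec3 o, Vec3 d) {
        if (d.y >= 0.0f) return {{}, {}, 0.0f, false}; // ray never hits the floor
        float t = -o.y / d.y;                          // parametric hit distance
        Vec3 p{o.x + d.x * t, o.y + d.y * t, o.z + d.z * t};
        Vec3 r{d.x, -d.y, d.z};                        // mirror reflection
        return {p, r, 0.8f, true};
    }

    float distanceTo(Vec3 a, Vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Trace one audio ray from a sound source, bouncing until its energy
    // drops below the audibility threshold, and report the intensity that
    // finally arrives near the player's position.
    float traceAudioRay(Vec3 origin, Vec3 dir, float energy,
                        Vec3 player, float threshold, int maxBounces) {
        for (int i = 0; i < maxBounces && energy >= threshold; ++i) {
            Hit hit = intersectScene(origin, dir);
            if (!hit.valid) break;                     // ray escaped the scene
            float seg = distanceTo(origin, hit.point);
            energy *= hit.reflectivity / (1.0f + seg * seg); // bounce + spreading loss
            if (distanceTo(hit.point, player) < 1.0f)  // crude "area of influence" test
                return energy;
            origin = hit.point;
            dir = hit.bounceDir;                       // continue along the bounce
        }
        return 0.0f; // nothing audible arrived
    }

    int main() {
        Vec3 source{0.0f, 2.0f, 0.0f};
        Vec3 dir{0.0f, -1.0f, 0.3f};
        Vec3 player{0.0f, 0.5f, 0.7f};
        std::printf("intensity at player: %f\n",
                    traceAudioRay(source, dir, 1.0f, player, 1e-4f, 8));
    }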

The problem is that the audio would have to be processed via shader programs, which means the audio tracks they work with would have to be extremely small, and would therefore lose detail and resolution if run from the Compute Units, since these have only a few kilobytes of local memory.

Audio tracks would be loaded in the same way as textures. Keep in mind that a GPU's caching system never loads a huge texture in one piece; likewise, loading an audio track would require integrated hardware fast enough to decompress audio on the fly, which could mean one more additional component.
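As a loose illustration of that streaming constraint, this sketch processes a track in chunks sized to fit in a few kilobytes of local memory; the sizes and the trivial gain "shader" are invented for the example:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // A Compute Unit's local memory holds only a few kilobytes, so the
    // track is processed in chunks that fit, never loaded whole.
    constexpr std::size_t kLocalMemoryBytes = 4 * 1024; // ~one CU's scratch
    constexpr std::size_t kChunkSamples = kLocalMemoryBytes / sizeof(float);

    // Trivial stand-in for whatever per-sample work a shader would do.
    void processChunk(float* samples, std::size_t n, float gain) {
        for (std::size_t i = 0; i < n; ++i) samples[i] *= gain;
    }

    int main() {
        std::vector<float> track(48000, 0.25f); // one second of audio at 48 kHz
        // Stream the track chunk by chunk, much as a GPU streams texture tiles.
        for (std::size_t off = 0; off < track.size(); off += kChunkSamples) {
            std::size_t n = std::min(kChunkSamples, track.size() - off);
            processChunk(track.data() + off, n, 0.5f);
        }
        std::printf("processed %zu samples in %zu-sample chunks\n",
                    track.size(), kChunkSamples);
    }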

3D headphones and immersion


Recently we have seen 3D headphones appear on the market, whose peculiarity is that they integrate three-dimensional positioning elements that allow the listener to be placed in the middle of a virtual scene.

This type of headphones will benefit greatly from Ray Tracing Audio. In a future game, for example, we could turn our head, watch our character turn theirs slightly, have the camera rotate with it, and capture the sounds according to the new position of the objects relative to the camera. For gameplay, the idea of a simulated 7.1 system recreating the "cinema experience" would be over: instead of a limited system of virtual speakers, each and every sound in the game would be represented by its own source, for greater precision and immersion.
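As a minimal sketch of that listener-relative positioning, assuming a simple head yaw and an equal-power pan law instead of a full HRTF (all names and values are illustrative):

    #include <cmath>
    #include <cstdio>

    struct Listener {
        float x, z;   // position on the ground plane
        float yaw;    // head orientation in radians, 0 = facing +z
    };

    // Angle of the source relative to where the listener's head points.
    float relativeAzimuth(const Listener& l, float srcX, float srcZ) {
        float worldAngle = std::atan2(srcX - l.x, srcZ - l.z);
        return worldAngle - l.yaw; // turning the head shifts every source
    }

    // Equal-power pan: convert azimuth into left/right ear gains.
    void earGains(float azimuth, float& left, float& right) {
        float pan = 0.5f * (1.0f + std::sin(azimuth)); // 0 = hard left, 1 = hard right
        left  = std::cos(pan * 1.57079633f);
        right = std::sin(pan * 1.57079633f);
    }

    int main() {
        Listener l{0.0f, 0.0f, 0.0f};
        float L, R;
        earGains(relativeAzimuth(l, 3.0f, 3.0f), L, R); // source to the front-right
        std::printf("facing forward: L=%.2f R=%.2f\n", L, R);
        l.yaw = 0.785398f; // turn the head 45 degrees toward the source
        earGains(relativeAzimuth(l, 3.0f, 3.0f), L, R);
        std::printf("head turned:    L=%.2f R=%.2f\n", L, R);
    }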

The changes required in current hardware to be able to enjoy Ray Tracing Audio are practically zero, and its requirements are much lower than those of rendering graphics. As for the video game industry, it will cost nothing to apply; that said, it may serve as an excuse for headphone manufacturers to create and sell new models.
