Charente (France) is a land of creativity, home to numerous studios and schools tied to the film (especially animation) and video game industries. Less well known is Angoulême’s audio expertise, embodied by G4F, which works nationally and internationally at the crossroads of these traditional industries and XR. Expert insight from Vincent Percevault, co-founder of G4F.
Who is G4F?
Vincent Percevault – G4F is celebrating its 15th anniversary this year (founded in 2008), and it’s much more than just a studio! We are now a full-fledged group dedicated to working with sound, with five divisions:
- G4F Prod is the best-known studio entity, entirely dedicated to interactive audio production for video games and XR.
- G4F Localisation is the French branch of Native Prime, a European group (present in Germany, Spain, Italy and France) that I created with European associates, specializing in dubbing in some 30 languages.
- G4F SFX, which produces sound libraries aimed at sound editors and sound designers, with packs available on dedicated marketplaces.
- G4F Records, which publishes and distributes soundtracks for the projects we support (mainly games).
- G4F Talents, a talent agency tied to the label, offering legal services, composer support, casting, motion capture…
V. P. – I come from the world of video games, starting out at Cyanide Studio after my studies at CNAM-Enjmin in Angoulême. My background was more on the IT side, with classical training at a conservatory and a real passion for sound, which took shape just as computer-based music production was emerging. It was my encounter with video games that drew me into these interactive worlds, and eventually convinced me to open my own studio dedicated to them. The first name was “Game Audio Factory”! Today we’re a staff of 20, most of them sound designers – which makes G4F one of Europe’s biggest studios in this field, with expertise in the sector’s latest technologies (we’re the French ambassador for Audiokinetic’s Wwise middleware, our studios are equipped with Dolby Atmos, we have several facial motion capture systems for dubbing…), which enables us to work in animation, VR, video games…
V. P. – With this technology stack, we are in great demand for platform-to-platform porting, and we work with all the real-time game engines and software used in audiovisual production. Although the majority of our customers remain French, we now get requests from teams all over the world. Most of our customers are medium-sized studios (in France and Europe) that don’t have the capacity to bring this part of their production in-house. With our solutions, we offer complete management of the post-production phases, giving our customers both support and autonomy.
V. P. – We started working in VR with Les Poissons Volants back in 2017 (THE WEDDING AT CANA BY PAOLO VERONESE, LAS MENINAS BY DIEGO VÉLASQUEZ) for ARTE. Our gaming expertise has enabled us to work on titles with more interactive audio and dubbing: UNIVERSAL WAR 1, MANDALA, PAPER BEAST… Then came WOLVES IN THE WALLS for the Quest port, in liaison with the Meta team to integrate the work on their platform. More recently, we have supported JAILBIRDS for Digital Rise, several experiments for Small Creative, Mathieu Pradat’s THE ROAMING…
V. P. – Immersive or more traditional content, it’s all the same to us. That said, we like to innovate and work on different types of experience. I’ve been teaching sound engines at CNAM-Enjmin for almost 20 years: when I started, it seemed like a very narrow subject! Today, the software has multiplied (Wwise, FMOD…). Interactive audio is our DNA. Getting involved in VR or other immersive experiences means continuing to innovate with all these technologies. I’d even add that we’re currently seeing a convergence with certain practices from the linear sector, such as object-based mixing (notably for Dolby Atmos). VR is a continuation of our activities, with research and development to adapt: and we’ve been doing it ever since YouTube started offering 360-degree playback!
V. P. – VR and AR projects linked to industrial simulators are also use cases that have helped us refine our approach to immersive media and develop technologies. That work feeds our artistic projects with different challenges, but just as much technical depth and reflection on what immersive audio should be.
An overview of immersive sound
V. P. – This is a sector where there’s still a lot of room for improvement – not least in terms of standards. The technology exists, and so do the tools. But when it comes to distribution, at least, things are still unclear. Dolby is moving fast enough to establish itself in production, but the heterogeneity of playback equipment prevents us from settling on a single model that would let us work more serenely on projects. In reality, we can already do everything with the software at our disposal. We can work on effects, realism… But we still need the distribution resources to match. VR has made an extremely positive contribution to the development of audio post-production tools.
V. P. – Sound is often the last item in the budget, coming after visuals, VFX… It remains the poor relation, often tackled in the home stretch of a production. But in VR, sound is essential! Awareness of this is growing, and we hope to earn a more consistent place on productions so we can achieve a real result. Experiences such as NOTES ON BLINDNESS: INTO DARKNESS, designed from the outset with a particularly high-quality approach to sound, have helped bring our professions to the fore. This is undoubtedly also a legacy of video games…
V. P. – The technological race around VR is also a constraint. Virtual reality is a field where we have to redo masters every two or three years, depending on the new headsets that come onto the market! We have to anticipate the discontinuation of certain devices, the replacement of certain plugins… Some projects are revisited several times for this purpose, even while they are still being distributed. There’s a real challenge here around portable, cross-platform, durable distribution formats. We’re not there yet, as manufacturers are not working together on this point.
Immersive sound = binaural sound?
V. P. – We’re working hard on binaural sound and convolution reverbs (a digital simulation of the reverberation of a physical or virtual space). In reality, each listener is unique (in terms of his or her HRTF profile – head-related transfer function), even if 3D sound can be heard by everyone. But we can’t yet fully reproduce the specific listening characteristics of each individual. If you’re an average listener, fine: you’ll hear in detail the work done by the audio teams, based on average profiles. But if your HRTF profile is out of the ordinary (inter-aural distance, pinna shape, etc.), then you lose precision. In headphone audio, this is the battle of the moment: recognizing the listener’s physical characteristics so that each listening experience is optimized. Today, you can scan your head via an application and be offered listening settings dedicated to your HRTF (“profiling”).
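To make the HRTF idea concrete: binaural rendering amounts to convolving a mono source with a pair of head-related impulse responses (HRIRs), one per ear, measured for a given direction. The sketch below is a minimal illustration in Python with NumPy; the HRIR arrays are purely synthetic stand-ins (a delay and attenuation on the far ear), not a measured HRTF set such as the "profiled" ones Percevault describes.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolving it with a pair of
    head-related impulse responses (HRIRs) for one direction.
    Both HRIRs must have the same length so the ears stay aligned."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])  # shape: (2, n_samples)

# Toy example: a single click. The right-ear HRIR is delayed and
# attenuated relative to the left, which the brain reads as a source
# located to the listener's left (inter-aural time/level differences).
click = np.zeros(8)
click[0] = 1.0
hrir_l = np.array([1.0, 0.3, 0.0, 0.0])  # near ear: strong, early
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])  # far ear: delayed, quieter
stereo = render_binaural(click, hrir_l, hrir_r)
```

Real systems interpolate between HRIRs measured on a sphere of directions and swap the set per listener; that per-listener swap is exactly what head-scanning "profiling" applications feed.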
V. P. – At the same time, we’re looking to create credible virtual acoustics from acoustic fingerprints (impulse responses of physical environments). We can capture and virtualize the acoustic imprint of an environment (like photogrammetry of the room, but in audio) and then inject raw sound sources into it. In this way, sound can be placed and spatialized with uncanny realism. This permanent “matching” between sound and environment is a real challenge for experts. As with visuals, we’re not far off! A good example of the use of impulse responses is still WOLVES IN THE WALLS – but it consumes a lot of power… And that brings us back to the autonomy of headsets, and the CPU power allocated to audio.
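The "acoustic photogrammetry" described above boils down to convolution reverb: convolve a dry source with the room's measured impulse response (IR), then blend dry and wet. Here is a minimal sketch, assuming a synthetic IR (exponentially decaying noise) in place of a real measured room fingerprint; the `wet` parameter and function name are illustrative, not any specific tool's API.

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet=0.5):
    """Place a dry source in a captured acoustic space by convolving
    it with the room's impulse response, then mixing dry and wet.
    A real IR would be measured in the room (sine sweep, clap...)."""
    wet_sig = np.convolve(dry, impulse_response)
    dry_padded = np.pad(dry, (0, len(wet_sig) - len(dry)))
    return (1 - wet) * dry_padded + wet * wet_sig

# Synthetic impulse response: decaying noise, a crude stand-in for a
# measured room "fingerprint" (2048 samples of exponential decay).
rng = np.random.default_rng(0)
ir = rng.standard_normal(2048) * np.exp(-np.linspace(0, 8, 2048))
dry = np.zeros(256)
dry[0] = 1.0  # a single click
wet_out = convolution_reverb(dry, ir)
```

Note that direct convolution like this costs O(N·M) per sample block; real-time engines use FFT-based partitioned convolution instead, and even then long IRs are expensive, which is the CPU-budget problem Percevault raises for standalone headsets.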