AudioCortex for AudioFinder
Coming soon! I have been working on this feature for years. What is it?
The biggest problem we all have is trying to manage millions of sounds with generic names like “sound123.wav” and/or no embedded metadata to describe the sound. How do you know what a sound is unless you listen to it? And how can you listen to it if you have millions of sounds? How many years would it take to play them all? AudioFinder can find all your sounds, but until now, unless they had embedded metadata or useful file/folder names, it didn’t know what they were sounds of. This is the holy grail of audio sample management, the quest I have been on for a long time.
It’s taken me years to get the right machine learning algorithm working, and I think I’ve finally solved it. Now I’m busy training it to automatically classify sounds into useful categories.
And what should those categories even be? To start, it will handle the obvious things: drum loops, pianos, guitars, and so on. Over time I hope to get precise enough to tell you even more about a sound.
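To give a feel for what automatic classification means (this is purely an illustrative sketch, not the actual AudioCortex algorithm), here is a toy example that sorts audio into “tonal” vs. “percussive” using a single hand-made feature, the zero-crossing rate. Real machine-learning classifiers learn far richer features, but the idea of mapping raw samples to a category label is the same:

```python
import math
import random

def zero_crossing_rate(samples):
    # Fraction of adjacent sample pairs that change sign.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

def classify(samples, threshold=0.2):
    # Noisy/percussive material crosses zero far more often than tonal material.
    # The threshold here is arbitrary, chosen just for this demo.
    return "percussive" if zero_crossing_rate(samples) > threshold else "tonal"

# A 440 Hz sine wave (tonal) vs. white noise (percussive-like), 1 s at 44.1 kHz.
rate = 44100
sine = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(rate)]

print(classify(sine))   # tonal
print(classify(noise))  # percussive
```

A real system replaces the single hand-tuned threshold with a trained model and many features, which is exactly why training takes so long.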
How will it work? It will be part of AudioFinder, and the feature is called AudioCortex (the brain). It will require macOS Mojave or newer; sorry, the older OS versions are not up to the task. It also needs a fast computer.
Because this is a radically new way to find sounds, the UI for it in AudioFinder is going to have to evolve over time as we develop new workflows.
This is the big feature I’ve been hinting at for a few years, and the reason I’ve decided to devote most of my coding time to it instead of other things.
I’ll update you all as I get closer to releasing it.
You might ask: how much is the upgrade fee? It will be free, like every AudioFinder update since 2003! You will not find another macOS product that has lived since 2003 and never charged a single upgrade fee.