The First Video Demo of AudioCortex

AudioCortex is a new deep learning sound browser for macOS. Trained on over 100,000 sounds, it can analyze sounds that have no metadata or other information and place them into logical descriptive categories.

Imagine you have a million sounds on your computer and don’t know what they sound like. How can you listen to all of them? You can’t. Do the math: 1 million sounds × 1 second of listening =
11 days 13 hours 46 minutes 40 seconds non-stop!
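The arithmetic above is easy to sanity-check with a few lines of Python (just an illustration, not part of AudioCortex):

```python
# One million sounds at one second each = 1,000,000 seconds of listening.
total_seconds = 1_000_000

# Break the total down into days, hours, minutes, and seconds.
days, rem = divmod(total_seconds, 86_400)   # 86,400 seconds per day
hours, rem = divmod(rem, 3_600)             # 3,600 seconds per hour
minutes, seconds = divmod(rem, 60)

print(f"{days} days {hours} hours {minutes} minutes {seconds} seconds")
# → 11 days 13 hours 46 minutes 40 seconds
```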

AudioCortex uses machine learning to do the listening for you. With a simple drag-and-drop interface, it will save you hours, days, weeks, months, or years of time, depending on how many sounds you have.

This is a quick demo of the pre-release version of AudioCortex; the expected release date is early September 2019.


Pretty amazing. Will this be part of AF at some point, or its own separate app? And how would one go about storing categorizations?

Also, it’s obviously early for suggestions, but… AF’s Achilles’ heel is when you have multiple machines with different folder structures or drive names. Everything has to be done manually. I’m hoping this will be able to embed metadata… (And hoping AF eventually has a way to deal with users who have portable and stationary machines with different file structures.)

It will be free with AF but not inside of it. AudioFinder supports older versions of macOS, and it’s impossible to build this inside of it. It’s not going to have a database. It’s designed to be used in real time and not have all the baggage of tagging and databases.

I was originally against adding a database to AudioFinder, but everyone requested it. It’s not a problem that can be simply solved for multiple users without some kind of server.

Also, my responses to user feature requests have made AudioFinder too hard to use for some. AudioCortex is going to be simple and stay simple.