Next up: Video Tutorials

Now that Big Sur support and AudioCortex are out, I’m going to turn my attention to producing some tutorials. Perhaps the most long-requested thing ever.

Please reply with the kind of tutorials you’d like to see.


This would be awesome. I have dyslexia, so retaining anything I read in a manual for an extended period is a challenge, to put it mildly.

Maybe starting with some ‘tips’ videos would be great. Like stuff that maybe you’re frequently asked about that isn’t super obvious.
E.g. if there are any search tips that aren’t obvious, like combining more than one search term.

It would also be interesting to see how you use AF. (This is where manuals fall short… They often explain all of the features but don’t explain how features relate to one another, or the context in which some features are best taken advantage of.)

It’s important to take into account AudioFinder’s long history. It was the first Mac OS X native sample manager ever released, and I’ve been at this a long time. After 2010, AudioFinder started reaching the limits of what’s humanly possible to manage. What I mean by that is: think about 20 years ago, how big were hard drives and how many sample libraries were there? AudioFinder became more complex and tricky to use because the number of sounds in the world to manage grew exponentially.

About 5 years ago I decided it was out of hand; there was no database or trick I could add that would scale to the task at hand. You need the computer to do the filtering for you, based on AI, to go further. At that point I stopped just adding random feature requests to AudioFinder, because it already has too much stuff in it and the name is Audio “Finder” after all. The Finding part was breaking down at scale. People have misinterpreted this as lack of attention on my part, even though I have been posting for years that I was working on a big new feature. I saw little point in doing anything other than fixing the Finding part of AudioFinder.

The way I’ve always used AudioFinder is, for lack of a better word, randomly. What I mean by that is I scan all my sounds and then use the random feature to pick sounds that I may or may not use in a project. If you have 4000 snares, for instance, your brain won’t remember them all even if you listen to them. And what if you don’t even know they are snares because they’re all called sound123.wav? Then what?

I first wrote it almost 20 years ago when my hard drive started filling up with samples and I had no easy way to play them. When I had too many sounds, I added the database to allow tagging them, and that was good for a while. However, over time the problem got much more complicated because I eventually ended up with millions of sounds, so my random workflow broke down at that scale.

This is why I’ve spent the last few years developing an AI system: it can break sounds down into broader categories, and within those broader categories I have a better chance of finding the sound I need. Manually tagging sounds breaks down at scale because it’s not possible to listen to all the sounds I have and then tag them. I don’t enjoy the chore of listening to a sound and tagging it either; it’s a complete waste of energy. This is why, for the last few years, AudioFinder’s feature set has remained fixed while I experimented with and developed this AI. There’s little point in pursuing a manual tagging database; it doesn’t scale into the future.
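
To make that concrete, here’s a minimal sketch of broad-category classification. It is not AudioCortex’s actual code; it assumes Python with librosa and scikit-learn installed, and the file names are hypothetical placeholders.

```python
# Minimal sketch of broad-category sound classification (not AudioCortex's
# actual code). Assumes librosa and scikit-learn are installed; the file
# names below are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def features(path):
    # Summarize a sample as its mean MFCC vector, a common baseline feature.
    audio, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# A small, manually curated training set: (file, broad category).
labeled = [
    ("kick_01.wav", "kick"),
    ("snare_01.wav", "snare"),
    ("hat_01.wav", "hat"),
]
X = np.array([features(path) for path, _ in labeled])
labels = [category for _, category in labeled]

clf = RandomForestClassifier(n_estimators=200).fit(X, labels)

# An untagged file (even one called sound123.wav) gets a broad category.
print(clf.predict([features("sound123.wav")]))
```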

The AI in AudioFinder is completely unique; I invented it. It’s not copied from some other product. Perhaps someone will come up with a better approach, but at this point, all of this is brand new territory.

It was incredibly time consuming to train my AI. I had to listen manually to one million sounds and curate them into usable broad categories. It’s getting to the point now where I can use the AI to train the AI further. Is it perfect? Not yet. Is it useful now? Yes, that’s why I’m releasing it. But it’s not done; it will continue to be developed and improved, and maybe it’s never done, it just evolves forever.

How I use AudioFinder now is I toss everything into AudioCortex, let it decide what it thinks the sounds are, and then filter them there. I believe the future of sample managing is not managing your samples yourself; they are managed for you by AI.

I’ll make a tutorial on how this new workflow works. In general I do this: I’ll open a folder or two full of sounds, dump them in AudioCortex, and then filter the sounds in there based on what I’m looking for, like “Snare”, and 8 out of 10 times AudioCortex can correctly identify a snare.
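
As a rough illustration of that “dump folders in, filter by category” workflow (again, not how AudioCortex is actually built), here’s a small sketch where classify() stands in for the AI step and the folder paths are hypothetical:

```python
# Rough sketch of the "dump folders in, filter by category" workflow (not how
# AudioCortex is implemented). classify() is a stand-in for the AI step: here
# it only guesses from the filename so the script runs on its own.
from pathlib import Path

AUDIO_EXTS = {".wav", ".aif", ".aiff"}

def classify(path: Path) -> str:
    # Placeholder: a real classifier analyzes the audio itself, not the name.
    return "snare" if "snare" in path.stem.lower() else "unknown"

def filter_sounds(folders, wanted="snare"):
    # Scan every folder, classify each audio file, keep only the wanted category.
    hits = []
    for folder in folders:
        for path in Path(folder).expanduser().rglob("*"):
            if path.suffix.lower() in AUDIO_EXTS and classify(path) == wanted:
                hits.append(path)
    return hits

# Hypothetical sample folders; substitute your own.
print(filter_sounds(["~/Samples/PackA", "~/Samples/PackB"]))
```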

If you had a time machine you’d see AudioCortex is similar to the AudioFinder of the mid 2000s, except instead of going in the direction of databases and tagging, it is all-in on AI. I’ve gone back to the roots of AudioFinder: making finding sounds easy.


Ok, so I am working on a video production workflow. Finally sorted out a good screen recorder.
TBH: I am not a YouTuber. I don’t feel comfortable talking or being in videos; it makes me feel like an idiot. I’m doing this by popular demand, and it feels worse than having to write a book report in high school.

I hear you. I write the scripts and send them off to a voiceover artist. I do everything but the speaking.

I should do that too, it’s a good idea.

Thanks Iced for this insight.
AudioFinder is far from perfect, but I stick with it mainly because once you’ve tagged/databased thousands of sounds you don’t want to redo the job in another piece of software.
I hoped that the embedded tag option you added a few years ago would develop further… but hey, that’s fine.
I think for some of us your “working like crazy on a big feature” for many years sounded like an excuse for not improving the program (or for some other, more personal reason); at least that has been my thought several times… so it’s amazing to finally see the results of your hard work!
It’s kind of funny to read that you were too annoyed by the idea of tagging all your sounds, and that you ended up categorising one million sounds to train/develop AudioCortex and work around this :slight_smile:
I’m still on older OS X versions so I can’t use AC yet, but I’m really eager to try it. This AI stuff is used everywhere as a selling point these days and I’m usually not very interested in it for that reason, but your AC explanation/justification totally convinced me! You’re probably fucking right! The future will tell.
I wish I could give a hand on your video tutorials, but I don’t think you need any help for audio editing and/or mixing, and that’s all I can offer unfortunately.
Keep up the hard work :slight_smile:

Thanks! When we talk about embedding stuff in audio files, what people don’t understand is that you cannot do that without re-writing and changing the audio files. Sounds harmless, maybe? But dig into it and you discover that there is no metadata standard for audio files that is universally adopted in all apps on all platforms. When I explored doing embedding for real, it was before I added the database, and I chose the database option because it was the least likely to result in a catastrophe. What kind of catastrophe? Imagine for a minute AudioFinder was 100% successful at embedding tags in audio files, and then you edit those files in another app and it removes the tags. You’ve just lost your effort. Another problem is that some apps, like ProTools, won’t recognize files if they change even slightly, and they become broken projects. I wish it was reliable and simple to embed stuff in audio files; it would have made everything easier. But it’s nowhere near failsafe.

I therefore went with the database option that doesn’t change the files: it fingerprints them in the database and then looks up metadata about them in the database, and this works well. But when you move the files, the database doesn’t know you moved them, and the connection between the database and the files is broken. To solve that I added the Metadata Database Manager that can re-link them. But it’s hard to use. I didn’t like either of these options; I’m searching for an option that requires little effort on your part.

I was in this dilemma for several years, because I just see embedded tags and the database as lousy options; they are too fragile. That’s when I started researching machine learning and iterating on different approaches. I saw how Google was getting really good at image search with AI, so I knew the algorithms existed. But there’s no road map for how to do this. I had to try and fail, and try and fail, and try and fail, until I got to the point with the AI where I felt it was usable. It’s not perfect; I’m not claiming it’s perfect. But it is trending in the direction where I see the most promise, and as the algorithms get better it’s eventually going to be the way it’s done. Why not? Isn’t it logical to conclude that computers should be able to use AI to identify sounds? The other thing is, I could not release the AI until I felt it was getting close, because if it’s even slightly janky people won’t give it a chance.
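
For anyone curious what the fingerprint-database approach looks like in principle, here’s a minimal Python sketch. It is not AudioFinder’s actual schema: tags are keyed by a content fingerprint, the audio file is never rewritten, and moved files can be re-linked by re-hashing them.

```python
# Minimal sketch of the "fingerprint in a database, never rewrite the file"
# idea (not AudioFinder's actual schema). Standard library only; a real
# fingerprint would more likely hash the decoded audio rather than the raw
# bytes, so that metadata edits don't break the link.
import hashlib
import sqlite3
from pathlib import Path

def fingerprint(path):
    # Hash the file's bytes so its identity survives renames and moves.
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

db = sqlite3.connect("metadata.db")
db.execute("CREATE TABLE IF NOT EXISTS sounds (fp TEXT PRIMARY KEY, path TEXT, tags TEXT)")

def tag(path, tags):
    # Store tags against the fingerprint; the audio file itself is never touched.
    db.execute("INSERT OR REPLACE INTO sounds VALUES (?, ?, ?)",
               (fingerprint(path), str(path), tags))
    db.commit()

def relink(folder):
    # If files were moved, re-hash them and point the metadata back at them.
    for path in Path(folder).expanduser().rglob("*.wav"):
        db.execute("UPDATE sounds SET path = ? WHERE fp = ?",
                   (str(path), fingerprint(path)))
    db.commit()
```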

I have said it before, but people don’t realize: AudioFinder is just myself. I have no employees, and I don’t charge for updates. I’m not trying to be a millionaire or build a software empire. Part of why I don’t do so much messaging on the forum is that 99% of tech support happens with people via email. By the time I’m done with that work each day, I don’t have the energy to do it all over again here. Tech support is very expensive, because 2 hours of emails vs 2 hours of coding is a loss. This gives people the impression I’m not paying attention, but the opposite is true; I’m just super busy.

I need to make a tutorial that explains AIFF and WAV formats to people. If people understood how fragile these formats are they’d appreciate the AudioFinder approach.

One other wrinkle happened in my plans this year, because ARM based Macs are coming. In order for AudioFinder to support those natively, I have to break backwards compatibility with older OS versions. That’s super annoying and I’m still working out solutions there. Expect those ARM based Macs to eventually be a demarcation line for most apps; many current apps will never make it over the transition. In any case, I have a bunch of time-consuming, unexpected work to port AudioFinder to run natively on ARM. In doing that, people will think “oh no new features, he’s not doing anything” when in actual fact, just to stay current, I have to do a bunch of stuff over.

Anyhow, fair warning: ARM Macs are going to slow down some of the other features I was planning, because I have this new bunch of work to do. Thanks, Apple.

Hey Iced, thanks for your answer.
I wasn’t expecting any more justifications about all the stuff you have to manage yourself, you already detailed it well. I just shared the doubts I had at some point in the past. I had no proof to back up these doubts, so I didn’t stick with them. If I mentioned it, it’s because I can imagine other people might also have had such doubts and ditched AudioFinder (I hope not!).
The same goes for the embedded tag topic, and you already gave great information about why it’s not a good solution.
Basically, I was chatting about all this stuff just to say that I really like your POV on these topics and that it totally justifies AudioCortex. (And English isn’t my native language, so I might say things awkwardly.)

And now ARM macs, yeah :confused:
We’ll see the benefits in a few years I’m sure, but that’s one more quite radical transition that’s slowing down everyone’s job.
I love AudioFinder and I’m perfectly fine with its current state :wink:

Hehe, not so much justifications; I’m leaving these comments here because I want people who read the forum, when I’m not paying close attention to it, to know my thinking.

Being a solo indie developer, I just cannot always read this forum, and in my absence theories and disinformation spread. Behind the scenes, I’m talking to users constantly every day. I wish I could find a volunteer to help me keep an eye on this forum when it’s hard for me to keep up.


@Iced I’ve needed a solution like this for years and stumbled upon AF, what a blessing.
I’m yet to get my head around it as it covers so much. I just wanted to take a moment to say THANK YOU…!! I teach Logic Pro at a sound school here in Sydney Aus and will definitely promote it to all the students.

Thanks. It’s overly complicated because it supports dozens of workflows. I’m still trying to get to making tutorials, but Apple surprised me with some extra work via M1 and Big Sur.