Atomic Tonocracy (Inc NAM support)

Uh, seriously? That's downright bad.
I'm quite aware of some shortcomings of the iPad remote idea, but I didn't know it was that bad.
Yeah it works fine most of the time if you consider it a secondary display and nothing else. But the moment you'd like it to also be a touchscreen or your primary screen, nope.
 
Yeah it works fine most of the time if you consider it a secondary display and nothing else. But the moment you'd like it to also be a touchscreen or your primary screen, nope.

Bah, really didn't know that. I always thought it was supposed to work more or less flawlessly (because, well, even if I never believe in ads, these kinda things usually work at least more or less well in Apple land).
 
For me using a laptop on stage would not be a huge problem. I've seen several Synthwave bands rocking that sort of rig even for guitar tones and it was fine. I'd treat it sort of like a rack rig - computer and audio interface out of the way, MIDI controller somewhere closer to the front of the stage.

I don't think remote controlling a Mac Mini with an iPad would work that well. I sometimes use the macOS Sidecar feature to use an older iPad Pro as a second display, and it can be flaky: sometimes it gets stuck after returning from sleep, or it might fail if the connection is not good. And it does not fail gracefully at all! A big problem with it is also that you cannot do anything but scroll with your finger. Clicking on anything requires using an Apple Pencil. I know, it's insane, right?
The only way my iPad Pro isn't driving me nuts is as a Logic Remote.
I used to use V-Control, and that was fine when it was a $50 app, but now it seems beyond broken.

As for laptops… I did a tour in 200 with an MBP, an Apogee Duet and an Emma overdrive, and had fewer hiccups than I had on subsequent tours with hardware modelers.
 
Bah, really didn't know that. I always thought it was supposed to work more or less flawlessly (because, well, even if I never believe in ads, these kinda things usually work at least more or less well in Apple land).
Apple stuff is honestly full of "it just doesn't work", especially when it comes to external display stuff. One of my monitors regularly refuses to wake up from sleep on the HDMI port, while the other one (same model) over a USB-C -> DisplayPort adapter is usually fine. My USB devices sometimes have to be unplugged and plugged back in too, just the cable on the Mac end. This on the latest and greatest MacBook Pro 16" M2 Max. Never a single one of these issues on my desktop PC running Windows 11.

I like MacOS generally over Windows, but it's certainly no picnic either.

But I think we are getting way off topic here.
 
My USB devices sometimes have to be unplugged and plugged back in too, just the cable on the Mac end.

Yeah, I know that one inside out. Sometimes I even have to reboot to bring the USB ports back. It doesn't get any better with an additional USB card (*check*), rather the opposite. But then, I'm on an old Mac Pro, so you'd expect USB to be vastly more robust today, especially as they've ditched FW years ago already (and maybe TB too, one day at least).
But I think we are getting way off topic here.

True. Well, maybe not *that* much OT, as this might be a viable candidate to run on a laptop rig.
 
I was thinking the 'hardware' options Atomic Tonocracy would create would be primarily 'player' hardware that could host captures and effects, with banks of presets, on-board editing, etc. A desktop version with finger-friendly controls, and a floor model that is more stomp-control oriented, similar in size to the Quad Cortex.

So both devices would have USB interface functionality, but not necessarily capture ability, or at least not without being connected to a computer.
Adding control switching, MIDI control, expression control, input impedance mojo, etc. shouldn't create a code problem.

So where does the code problem surface in that hardware scenario? Does the standalone player I'm describing require more processing than a typical multi-effect modeler, pushing it into multi-thousand-dollar territory? Or am I leaving out some functionality you guys are imagining would be part of the design that enters into the problem?
 
I was thinking the 'hardware' options Atomic Tonocracy would create would be primarily 'player' hardware that could host captures and effects, with banks of presets, on-board editing, etc. A desktop version with finger-friendly controls, and a floor model that is more stomp-control oriented, similar in size to the Quad Cortex.

So both devices would have USB interface functionality, but not necessarily capture ability, or at least not without being connected to a computer.
Adding control switching, MIDI control, expression control, input impedance mojo, etc. shouldn't create a code problem.

So where does the code problem surface in that hardware scenario? Does the standalone player I'm describing require more processing than a typical multi-effect modeler, pushing it into multi-thousand-dollar territory? Or am I leaving out some functionality you guys are imagining would be part of the design that enters into the problem?
All that would be feasible. Only the capturing itself is processor intensive, the player part is not.

It's basically like the Quad Cortex except you need to use your computer for captures. The device itself could still be used as the audio interface for recording the signal for the captures, though. I would not be surprised if NeuralDSP themselves offer something like this in the future, when they are done with far more important things. Hell, it could probably send the data to the cloud wirelessly and then you get a notification "your capture is ready and installed!" when you fire up the QC again later.
 
Hell, it could probably send the data to the cloud wirelessly and then you get a notification "your capture is ready and installed!" when you fire up the QC again later.

Something like that would be absolutely cool. I mean, more often than not, all these cloud-based things serve little or no purpose for me other than making things less accessible, or people trying to sell you whatever it might be. But if you could just capture the raw data and upload it, without the learning process clogging up your modeler or computer, that'd be quite a valuable service.
 
Something like that would be absolutely cool. I mean, more often than not, all these cloud-based things serve little or no purpose for me other than making things less accessible, or people trying to sell you whatever it might be. But if you could just capture the raw data and upload it, without the learning process clogging up your modeler or computer, that'd be quite a valuable service.
Yeah and it could be a pretty great tool for capture makers as well if you could just keep firing the data into the cloud and not have to wait for each capture to process individually. It would become a "change settings -> record beeps and boops through the system -> send to cloud for processing -> repeat with different settings -> get captures back whenever they are ready."

Now that I think of it, the capture process in every device is actually a bit stupid. It should be basically 4 steps:

1. Setup your gear.
2. Record X captures, changing settings in between.
3. Batch process captures.
4. Add metadata to captures.

But now it's that process for every capture individually, with a significant wait for each capture to process.
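A minimal sketch of that four-step batch flow in Python; `CaptureJob` and `batch_process` are hypothetical names for illustration, not any real device or Tonocracy API:

```python
from dataclasses import dataclass, field

@dataclass
class CaptureJob:
    """One recorded sweep (step 2) plus its metadata (step 4)."""
    audio_path: str
    metadata: dict = field(default_factory=dict)

def batch_process(jobs):
    """Step 3: hand the whole queue off at once instead of waiting
    on each capture individually; here we just mark each one queued."""
    return [(job.metadata.get("name", job.audio_path), "queued")
            for job in jobs]

# Steps 1-2: gear set up, several sweeps recorded at different settings
jobs = [
    CaptureJob("sweep_clean.wav", {"name": "JCM800 clean"}),
    CaptureJob("sweep_crunch.wav", {"name": "JCM800 crunch"}),
]
print(batch_process(jobs))
```

The point of the sketch is simply that the wait happens once, for the whole queue, instead of once per capture.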
 
Now that I think of it, the capture process in every device is actually a bit stupid. It should be basically 4 steps:

1. Setup your gear.
2. Record X captures, changing settings in between.
3. Batch process captures.
4. Add metadata to captures.

But now it's that process for every capture individually, with a significant wait for each capture to process.
I think one aspect that needs to change, going from Kemper/QC "5 minute" captures to ones that take over 15, is that they should include some DIs to record the amp in its present state to A/B against. If there is a batch of 5 captures queued up, all with different settings, it's basically impossible to A/B them and verify they've come out successfully.
 
Thanks to the community for the constructive feedback on Tonocracy. We are listening and appreciate it.

Captures made using the free version will now be private by default, matching the behavior of the full version.

There is also updated documentation on the Tonocracy website including a new quickstart guide, a more detailed description of what's included in the free vs full version and more.

Get v1.01 of Tonocracy and check out the Quick Start Guide

As we've mentioned, this is just the beginning for Tonocracy. We look forward to developing the platform rigorously and drawing inspiration from customers and community members.

-TK
 
Yeah and it could be a pretty great tool for capture makers as well if you could just keep firing the data into the cloud and not have to wait for each capture to process individually. It would become a "change settings -> record beeps and boops through the system -> send to cloud for processing -> repeat with different settings -> get captures back whenever they are ready."

Now that I think of it, the capture process in every device is actually a bit stupid. It should be basically 4 steps:

1. Setup your gear.
2. Record X captures, changing settings in between.
3. Batch process captures.
4. Add metadata to captures.

But now it's that process for every capture individually, with a significant wait for each capture to process.
You do know that this is EXACTLY what Tonocracy does, right? :)
 
Another suggestion:

make the process of entering metadata faster and easier. The more data that is included, the more useful each capture is (and it means the cloud subscription is getting the most efficient use).

Some kind of smart options would help: selecting recent names, importing data from previous captures, incrementing numbers, autocompleting words, colour coding, images, suggestions.

The easier (and more fun) it is to enter this data, the more useful it is. A cloud of 10,000 models is only as useful as the way it's stored and how easy it is to find what you're looking for. Being able to do this during the training process is ideal, because that's time you'd otherwise spend twiddling your thumbs.

This is why it would also be good to be able to group models together in some way.
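One of those smart options, incrementing numbers in capture names, is trivial to sketch; `next_name` is a hypothetical helper, not an existing Tonocracy feature:

```python
import re

def next_name(previous: str) -> str:
    """Suggest the next capture name by incrementing a trailing number,
    preserving zero-padding: 'Plexi crunch 01' -> 'Plexi crunch 02'.
    Names without a trailing number get ' 2' appended."""
    m = re.search(r"(\d+)$", previous)
    if not m:
        return previous + " 2"
    num = m.group(1)
    incremented = str(int(num) + 1).zfill(len(num))
    return previous[: m.start()] + incremented

print(next_name("Plexi crunch 01"))  # -> Plexi crunch 02
print(next_name("5150 lead"))        # -> 5150 lead 2
```

Autocomplete and recent-name recall would sit on top of the same idea: derive the suggestion from what was just entered instead of making the user retype it.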
 
Captures made using the free version will now be private by default, matching the behavior of the full version.
This is a great move, and I believe it will still work in your favour. Well done for listening and reacting.

If a user makes (say) 50 models with the free one, they have a massive incentive to unlock the software to actually be able to use them freely. The other limitations are enough of an incentive to buy the software.
 
LOL
 

[Attachment: TCSG.jpg]
All that would be feasible. Only the capturing itself is processor intensive, the player part is not.

It's comparatively less processor intensive, but NAM in particular is still very taxing on DSPs and SoCs in general. Every hardware solution out there supporting NAM can only handle so-called lightweight models ("nano" weights), and those eat up pretty much all available DSP.

Tonocracy allows running multiple NAM models on a single preset, so I wouldn't hold my breath for a hardware implementation anytime soon. We're only recently starting to see cheap hardware solutions with dedicated hardware for neural networks (NPU cores).

I'd looooooooooooooove for @atomicamps to prove me wrong though :LOL: And hell, a box supporting just modeling+FX, which sound pretty damn good on Tonocracy, would still be a winner.
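To make the "taxing on DSPs" point concrete, here is a back-of-envelope estimate of the multiply load of a small WaveNet-style capture model; the layer count, channel width, and kernel size below are assumed for illustration, not NAM's actual architecture numbers:

```python
def mults_per_second(layers: int, channels: int, kernel: int,
                     sample_rate: int = 48_000) -> int:
    """Rough multiplies/sec for a stack of dilated conv layers:
    each layer does about channels * channels * kernel multiplies
    per output sample, and audio runs at sample_rate samples/sec."""
    per_sample = layers * channels * channels * kernel
    return per_sample * sample_rate

# Assumed example: 18 layers, 16 channels, kernel size 3, 48 kHz audio
print(f"{mults_per_second(18, 16, 3) / 1e6:.0f} million multiplies/sec")
```

Even at these modest (assumed) sizes the count lands in the hundreds of millions of multiply-accumulates per second, per model instance, which is why stacking several NAM models in one preset is hard for a DSP without dedicated NPU silicon.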

It's basically like the Quad Cortex except you need to use your computer for captures.

Keep in mind that the Quad Cortex, Kemper, Headrush Prime et al are not AI-based solutions - no matter how much certain companies would like you to think otherwise.
 
It's comparatively less processor intensive, but NAM in particular is still very taxing on DSPs and SoCs in general. Every hardware solution out there supporting NAM can only handle so-called lightweight models ("nano" weights), and those eat up pretty much all available DSP.

Tonocracy allows running multiple NAM models on a single preset, so I wouldn't hold my breath on getting a hardware implementation doing the same anytime soon. We're only recently starting to see cheap hardware solutions with dedicated hardware for neural networks (NPU cores).
Is this why I have to set the buffer in Tonocracy 3 or 4 times higher compared to other amp sim software? It's still low enough not to be a problem, but on complex presets it gives little blips of noise if I don't bump it up.
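The cost of bumping the buffer up is easy to quantify: the added latency is just buffer size over sample rate. A quick sketch (48 kHz assumed):

```python
def buffer_latency_ms(frames: int, sample_rate: int = 48_000) -> float:
    """One-way latency contributed by an audio buffer of `frames` samples."""
    return frames / sample_rate * 1000

# Quadrupling the buffer to avoid dropouts on heavy presets
# trades CPU headroom for added latency:
for frames in (128, 256, 512):
    print(frames, f"{buffer_latency_ms(frames):.1f} ms")
```

So going from 128 to 512 frames adds roughly 8 ms each way, which is why a 3-4x higher buffer can still feel "low enough to not be a problem".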
 
Captured my old Soldano X88R into a VHT 2150 power amp, straight into a '72 Marshall greenback-loaded cab. Recorded with an SM57; nothing done to the track but normalization and some plate reverb added in my DAW.



I hope it will play for you guys. I can't get it to play on my computer.
The playing isn't much, but "I" dug the tones!
 
So I was chatting to @[Nathan] about ToneX struggling with high-gain models and figured I'd test it for myself. This is the 5150 red channel with a Maxon OD9 in front, into a Suhr load.



I think they all sound pretty close. To have some kind of measurable way of comparing them, I flipped the phase and lined them up based on where I could get the biggest null. In this instance, NAM had the biggest cancellation, followed by Tonocracy and then ToneX. I can't really draw any conclusions besides how they sound, which to my reckoning is that they're all absolutely fine. Thought it might be of interest to show.
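The phase-flip null test described above can be turned into a number: how far the residual of two aligned signals sits below the reference, in dB. A minimal NumPy sketch, assuming both signals are already time-aligned and level-matched:

```python
import numpy as np

def null_depth_db(a: np.ndarray, b: np.ndarray) -> float:
    """Null depth between two aligned signals: RMS of the residual
    (a - b) relative to the RMS of the reference, in dB. More
    negative means a deeper null, i.e. a closer match."""
    residual = a - b
    ref_rms = np.sqrt(np.mean(a ** 2))
    res_rms = np.sqrt(np.mean(residual ** 2))
    return 20 * np.log10(res_rms / ref_rms)

# Synthetic demo: a 220 Hz tone vs. a near-identical "capture"
t = np.linspace(0, 1, 48_000, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t)
close = sig * 0.99  # 1% level error
print(f"{null_depth_db(sig, close):.1f} dB")  # -> -40.0 dB
```

The "biggest null" comparison in the post corresponds to whichever candidate yields the most negative number here.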

The model I made for this test is on the Tonocracy cloud because I'm on v1.0.0 and had no way of not sharing :ROFLMAO:
 