Boss GM-800 and GK-5

I'm only confused why anyone would want these translated via MIDI, or anything else for that matter?

Because they're part of the guitar's articulation vocabulary.
Not that they'd make much sense all the time - but it also doesn't have to be the whole enchilada. On a smaller scale, slurs and double bends are pretty much the same thing - and yes, even if it were possible, I definitely wouldn't want these to be ported into the land of MIDI. I spent a great deal of time practising articulation on the guitar and don't want that to be lost. Which, fwiw (and as said before), is why I absolutely dig Roland's work on the VG and SY series - because those actually take guitar articulations into account.

And again, and also as said: this is the fundamental difference between guitars and keyboards when used as MIDI controllers.
Pretty much anything you do on a keyboard (no, we don't need esoteric examples of people playing prepared pianos with spoons and whatnot) translates into MIDI 1:1. Heck, even if you are a classically trained pianist, you don't need to adjust much (if any) of your core playing technique.
On the guitar, however, not only is a ton of information lost, it even causes unwanted side effects, so as a result you have to adjust your playing technique quite a bit (or just get along with whatever it might be). In addition, the converter units are better off if they allow for some pre-filtering before the signal hits your DAW or synth. There are "data thinning" options for a reason. I still remember the memory overloads on the Atari ST. Without careful playing and data thinning, you could simply run out of note/data memory back in the day. None of that would ever happen with a master keyboard.
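To illustrate the kind of pre-filtering I mean, here's a minimal sketch of time-based "data thinning" in Python - a made-up helper for illustration, not any particular converter's actual algorithm:

# Minimal MIDI CC "data thinning" sketch (hypothetical, for illustration):
# drop controller messages that arrive too soon after the previous one,
# unless the value jumped enough to matter.

def make_cc_thinner(min_interval_ms=10, min_delta=2):
    last = {}  # (channel, cc_number) -> (timestamp_ms, value)

    def should_forward(channel, cc_number, value, timestamp_ms):
        key = (channel, cc_number)
        if key not in last:
            last[key] = (timestamp_ms, value)
            return True
        prev_time, prev_value = last[key]
        if (timestamp_ms - prev_time >= min_interval_ms
                or abs(value - prev_value) >= min_delta):
            last[key] = (timestamp_ms, value)
            return True
        return False  # thinned out

    return should_forward

# Example: a dense controller stream, one message per millisecond.
thin = make_cc_thinner()
events = [(0, 1, 60 + t, t) for t in range(20)]
kept = [e for e in events if thin(*e)]
print(len(kept), "of", len(events), "messages forwarded")

On those old Atari setups, exactly this kind of thinning could be the difference between a usable take and running out of memory.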
 
What's the solution for the guitarist getting into guitar synth today?

Obviously the device discussed in this thread (if you really wanted "true" synthesis from the ground up). Or any other similar device, for that matter.
But if I were to build a synth-like guitar rig, I'd likely go for an SY-1000 (maybe even combined with a VG, but those are really showing their age IMO) or get even more adventurous by tweaking the split-up hex pickup signal myself (a nightmare to keep track of, but these days there might be better tools to organize the six audio streams and possibly tweak them simultaneously if needed).
 
I am trying to figure out, when MIDI was first developed, why they called it MIDI (Musical Instrument Digital Interface) and not just KDI (Keyboard Digital Interface).
Was there some foresight back then that this tech would also apply to other types of instruments too?
 
My pedal board will be
GK > Primova GX-2 (GK modulations and string routing to 2 GK outputs).
SY-1000 (with favorite analog preamp/dirt pedal in loop).
GKC-AD converter to GM-800.
Small DIY mixer for 2 x stereo outs to PA and mono mix to stage amp.
 
Also, name me one Roland/Boss guitar synth where you can layer 4 parts?
The 1990s Roland GR-50 could layer 4 parts. These were known as partials, and they could consist of PCM sample snippets and/or a subtractive synthesizer algorithm.
Roland had never created a ZenCore-based guitar synth prior to the GM-800.
Are those sounds glitch-free, like on the low E string? Please tell me you've never had a tracking error with the GM-800.
Fishman doesn't really cut it either, as already outlined clearly in this thread.
I've read that many people still prefer the Fishman TriplePlay over the GM-800, and I believe the latency measurements favor the TriplePlay. And remember that the TriplePlay can trigger any external synthesizer you want, including the actual analog synthesizers that the GM-800 only offers as sample snippets.

It's good enough for John McLaughlin, but even he knows it's not perfect.
 
The 1990s Roland GR-50 could layer 4 parts. These were known as partials, and they could consist of PCM sample snippets and/or a subtractive synthesizer algorithm.

Are those sounds glitch-free, like on the low E string? Please tell me you've never had a tracking error with the GM-800.

I've read that many people still prefer the Fishman TriplePlay over the GM-800, and I believe the latency measurements favor the TriplePlay. And remember that the TriplePlay can trigger any external synthesizer you want, including the actual analog synthesizers that the GM-800 only offers as sample snippets.

It's good enough for John McLaughlin, but even he knows it's not perfect.

The GM-800 isn't perfect and a little slow on the low E and A, but as far as bloops and bleeps and glitches like the GR-55 and other past ones go - nope! Once it's set up right I don't get that at all.

Ok, so maybe the GR-50 could do partials, but I bet it couldn't layer 4 parts anywhere it wanted on the fretboard like the GM-800 can? :) Technically 5 parts if I wanted to add the drum partial... And yes, I know the TriplePlay can do that, but I'm talking Boss/Roland.
 
Because they're part of the guitar's articulation vocabulary.

Serious questions. If pick scrapes, dive bombs, pinch harmonics, and the like were able to be communicated via MIDI (or any other protocol for that matter) to a synth, what would they be used for? What would they translate into? Would they open or close the filter? Increase the resonance? Create a controlled pitch slide? Generate some modulation of vibrato, tremolo, additive harmonic overtone content, or something else? Change the envelopes and how they react? Generate new notes? How would you assign such a source of control to a modulation destination within the synth, and what range of effect would it be? What amount of bit-rate resolution would be needed to effectively make it all work without clogging up the data stream and without getting a zipper-like reaction to the end expression that is desired? Lastly, what types of synth sounds would benefit from those guitar-centric playing styles?
 
I am trying to figure out, when MIDI was first developed, why they called it MIDI (Musical Instrument Digital Interface) and not just KDI (Keyboard Digital Interface).
Was there some foresight back then that this tech would also apply to other types of instruments too?

You can tinker about that all you want - but my "musings", if you will, aren't even about that. They're about factual things. Pitch-to-MIDI, or rather "string-to-MIDI" (which adds quite a few further issues), is inherently vastly inferior to the super controlled environment of a key sending simple on/off messages plus velocity data. Because that's all that's happening.

Apart from that, it doesn't even need to be mentioned that MIDI serves a lot of other purposes than just transferring keyboard note data.
But that still doesn't make it any more suitable for whatever Guitar-to-MIDI adventures.

In fact, it's not even the fault of MIDI that it doesn't work all that well with guitars; it's rather the guitar itself, which simply wasn't designed to throw out electric control messages (whereas that is what keyboards are built around, even the oldest ones). If it were, it'd possibly look like a SynthAxe, or at least feature sensing frets, so you wouldn't have to deal with the burden of having to extract pitch data out of what is an incredibly complex source signal.
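Just to put "incredibly complex source signal" into perspective: even the textbook easy case - naive autocorrelation pitch detection on a pure, steady sine wave - takes real work, and that's a case that never actually occurs on a guitar. A quick sketch (plain numpy, idealized input; real string signals with their moving harmonics and pick transients break these assumptions immediately):

import numpy as np

# Naive autocorrelation pitch detector - the idealized case only.
# Real guitar strings (evolving harmonics, pick transients, sympathetic
# resonance) violate the "one stable period" assumption right away.

def detect_pitch(signal, sample_rate, fmin=60.0, fmax=1000.0):
    signal = signal - np.mean(signal)
    autocorr = np.correlate(signal, signal, mode="full")
    autocorr = autocorr[len(autocorr) // 2:]      # keep non-negative lags
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag = lag_min + np.argmax(autocorr[lag_min:lag_max])
    return sample_rate / best_lag

sr = 44100
t = np.arange(sr // 10) / sr                      # 100 ms of audio
low_e = np.sin(2 * np.pi * 82.41 * t)             # idealized low E string
print(round(detect_pitch(low_e, sr), 2))          # ~82.4 Hz

Note that even here the detector needs several full cycles of the waveform (a low E period is about 12 ms) before the lag peak is reliable - part of why tracking always feels slowest on the low strings.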
 
Serious questions. If pick scrapes, dive bombs, pinch harmonics, and the like were able to be communicated via MIDI (or any other protocol for that matter) to a synth, what would they be used for? What would they translate into? Would they open or close the filter? Increase the resonance? Create a controlled pitch slide? Generate some modulation of vibrato, tremolo, additive harmonic overtone content, or something else? Change the envelopes and how they react? Generate new notes? How would you assign such a source of control to a modulation destination within the synth, and what range of effect would it be?

Simple answer (still absolutely serious): Anything you like could basically be possible. As is, nothing is possible.

What amount of bit-rate resolution would be needed to effectively make it all work without clogging up the data stream and without getting a zipper-like reaction to the end expression that is desired?

Data resolution shouldn't be an issue. You can go all mad with synths and record tons of controller information on the same track, all transmitted at once to whatever will create the final sound.
It's all (!) about the actual conversion process. That very process would need to be able to identify the difference between smooth glides and heavy slurs. It would need to identify dead notes as such, and not as something with an assigned pitch (which is what all current systems are doing). Etc.
In other words: anything not sounding exactly like a clear note when you play your guitar would have to be translated into controller data (because there's no "advanced note parameter" subset in the MIDI protocol).
As long as these things can't be done, none of this will change - and IMO we're faaar away from anything like that happening, especially given that all this is a super niche thing anyway.

In addition, the receiving side would likely have to be adjusted to make use of that additional data, too - but on many synths that could be done already; you can basically use any controller information to control anything through direct CC routing, modulation matrices and whatnot.

Lastly, what types of synth sounds would benefit from those guitar-centric playing styles?

Anything you want.
In fact, even if you didn't explicitly put any of that additional controller data to dedicated use, everything would still profit. Because if the guitar signal could be dissected like that (which, as said, I doubt will happen any day soon), the additional information could also be filtered out a lot more easily, resulting in a cleaner signal arriving at your synth's MIDI in.
As an example: you're playing some funky rhythm stuff, including lotsa dead notes - exactly the kind of stuff that Guitar-to-MIDI translates pretty badly. Now, if our super hypothetical new Guitar-to-MIDI converter could separate the clean note data from the mutes and turn the latter, say, into CC data, you could just ignore that on the receiving end and only use the clean chord data for whatever synths. That could for instance be a 4-part horn section - typical drop 2 chords as often used in funky guitar translate to horns extremely well - and all of a sudden your funky guitar would be doubled by a horn section (which can sound quite great, been doing that kinda stuff already...) in realtime, simply because all those pesky dead notes would be ignored by the horn section (which is the trick to make this work).
On other patches you may want to explicitly make use of that additional data. Percussive dead notes could surely work great on any synth patch sounding a bit, say, clavinet-ish (on a realistic-sounding clavinet patch they likely wouldn't, but who knows...). But for that to work properly, the converter would have to translate them into additional CC data, too - which you could then use to control, say, an envelope, just so the mutes you play have their sort of equivalent on the synth used.
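If you want that in code form: here's how the output stage of such a hypothetical converter might look - the event classes and the CC number below are entirely made up for illustration; no current unit works this way:

# Hypothetical converter output stage: clean notes become note-ons,
# dead notes become CC data the receiver can ignore (horn section) or
# route to an envelope (clav-ish patch). All names/numbers invented.

DEAD_NOTE_CC = 20  # arbitrary, otherwise-unassigned controller number

def translate(event):
    kind, string, pitch, intensity = event
    if kind == "note":
        return ("note_on", pitch, intensity)
    if kind == "dead_note":
        # no meaningful pitch - report only the percussive intensity,
        # so a receiving patch can listen to it or simply not care
        return ("control_change", DEAD_NOTE_CC, intensity)
    return None

# funky rhythm figure: chord tones interleaved with percussive mutes
performance = [
    ("note", 4, 60, 96), ("note", 3, 64, 90),
    ("dead_note", 5, None, 70),
    ("note", 2, 67, 92),
    ("dead_note", 6, None, 64),
]
for ev in performance:
    print(translate(ev))

The horn-section trick from above is then just "ignore CC 20" on the receiving end; the clav-ish patch instead routes CC 20 to an envelope.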
 
What amount of bit-rate resolution would be needed to effectively make it all work without clogging up the data stream and without getting a zipper-like reaction to the end expression that is desired? Lastly, what types of synth sounds would benefit from those guitar-centric playing styles?
The original MIDI 1.0 spec, as we all know, is limited to 31250 bits per second, so you quickly reach the data limits with lots of 'un-thinned' note and controller data.
Hence the MIDI MPE specification and the MIDI 2.0 spec with its higher transport bandwidth.
I was hoping Roland/Boss would implement this when the SY-1000 was released, but only once manufacturers all come on board with MIDI 2.0 and MPE might we see more expressive Guitar-to-MIDI converters.
With the MIDI 1.0 spec, all note and controller data resolution is 7-bit, with 14-bit reserved for a few messages like pitch bend. And it doesn't matter whether it's keyboard keys or Guitar-to-MIDI - the resolution is the same. That said, Guitar-to-MIDI can at times (depending on hardware and setup) give more expressive control, with an envelope follower adjusting note velocity and pitch bend in real time. (That is putting aside the ADSR envelope of a synth tone: the sustain of strings can affect the note level as the strings fade out, whereas a key will hold the note at a constant level until released.)
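For those who like numbers, the back-of-the-envelope arithmetic behind that (spec figures only, nothing vendor-specific):

# MIDI 1.0 throughput: 31250 baud, 10 bits on the wire per byte
# (1 start + 8 data + 1 stop), 3 bytes per note-on or CC message.

BAUD = 31250
BITS_PER_BYTE = 10
bytes_per_sec = BAUD / BITS_PER_BYTE        # 3125 bytes/s
msgs_per_sec = bytes_per_sec / 3            # ~1042 three-byte messages/s
print(round(bytes_per_sec), "bytes/s, ~", round(msgs_per_sec), "messages/s")

# Six strings each sending pitch bend at, say, 100 updates per second
# already eat more than half of that budget before any notes are sent:
pb_load = 6 * 100 / msgs_per_sec
print("pitch bend alone:", format(pb_load, ".0%"), "of the bus")

# Resolution: 7-bit controllers give 128 steps; pitch bend carries a
# 14-bit value, i.e. 16384 steps.
print(2**7, "CC steps vs", 2**14, "pitch bend steps")

The 100-updates-per-second figure is just an assumption for the example, but it shows why 'un-thinned' hex data clogs a MIDI 1.0 stream so quickly.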

Use cases will vary wildly for different users. For myself, the GM-800 synth is not going to be my core tone; it's about the whole sound with all the other bits mixed in. I don't care for internet opinions that tell me I'm wrong - it works for me, and I am happy with whatever limitations the current technology has. It can only get better in the future.

I am also grateful to have been born into these amazing times of technology, and not to still be beating a stick on rocks for entertainment.

 
Simple answer (still absolutely serious): Anything you like could basically be possible. As is, nothing is possible.



Data resolution shouldn't be an issue. You can go all mad with synths and record tons of controller information on the same track, all transmitted at once to whatever will create the final sound.
It's all (!) about the actual conversion process. That very process would need to be able to identify the difference between smooth glides and heavy slurs. It would need to identify dead notes as such, and not as something with an assigned pitch (which is what all current systems are doing). Etc.
In other words: anything not sounding exactly like a clear note when you play your guitar would have to be translated into controller data (because there's no "advanced note parameter" subset in the MIDI protocol).
As long as these things can't be done, none of this will change - and IMO we're faaar away from anything like that happening, especially given that all this is a super niche thing anyway.

In addition, the receiving side would likely have to be adjusted to make use of that additional data, too - but on many synths that could be done already; you can basically use any controller information to control anything through direct CC routing, modulation matrices and whatnot.



Anything you want.
In fact, even if you didn't explicitly put any of that additional controller data to dedicated use, everything would still profit. Because if the guitar signal could be dissected like that (which, as said, I doubt will happen any day soon), the additional information could also be filtered out a lot more easily, resulting in a cleaner signal arriving at your synth's MIDI in.
As an example: you're playing some funky rhythm stuff, including lotsa dead notes - exactly the kind of stuff that Guitar-to-MIDI translates pretty badly. Now, if our super hypothetical new Guitar-to-MIDI converter could separate the clean note data from the mutes and turn the latter, say, into CC data, you could just ignore that on the receiving end and only use the clean chord data for whatever synths. That could for instance be a 4-part horn section - typical drop 2 chords as often used in funky guitar translate to horns extremely well - and all of a sudden your funky guitar would be doubled by a horn section (which can sound quite great, been doing that kinda stuff already...) in realtime, simply because all those pesky dead notes would be ignored by the horn section (which is the trick to make this work).
On other patches you may want to explicitly make use of that additional data. Percussive dead notes could surely work great on any synth patch sounding a bit, say, clavinet-ish (on a realistic-sounding clavinet patch they likely wouldn't, but who knows...). But for that to work properly, the converter would have to translate them into additional CC data, too - which you could then use to control, say, an envelope, just so the mutes you play have their sort of equivalent on the synth used.

May I ask, have you ever created your own new patches from scratch in a complex synth like ZenCore, or even a simple one like a Minimoog? Much, if not almost all, of what you are hoping for isn't available within these synths, and that has nothing to do with MIDI.

Look at the knobs, switches, and controls on this Minimoog and consider where the desired guitar-centric applications might be applied. Realistically, there isn't much to work with at the end of the line that isn't already available without all that guitar stuff.

[Image: Minimoog front panel]


ZenCore/Zenology isn't much more accessible in terms of usable and/or useful destinations to control in the way desired. Most of the basic complexity in ZenCore (and also in other complex synths) comes from the basic building blocks that generate the core sound - that is, the waveforms of the oscillators or samples and their cross modulation, filtering types, envelope settings, and effects. All of the available parameters for end modulation are already driven by note number, velocity, pitch bend, mod wheel, aftertouch, and/or expression pedal. Obviously, a guitar doesn't have a mod wheel or aftertouch. But the GM-800 does have control switch and expression pedal inputs that can be assigned to these as desired.

[Image: ZenCore/Zenology editor]
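As a generic illustration of that kind of routing - the source and destination names below are invented for the example, not ZenCore's actual parameter list:

# Generic mod-matrix sketch (invented names, NOT ZenCore's real
# parameters): the sources a GM-800-style rig already exposes, each
# routed to a typical destination with a per-slot depth.

mod_matrix = [
    # (source,           destination,      depth)
    ("velocity",         "amp_level",       1.00),
    ("pitch_bend",       "osc_pitch",       1.00),
    ("expression_pedal", "filter_cutoff",   0.75),
    ("control_switch_1", "fx_send",         0.50),
]

def apply_mod(sources, params):
    """Sum each routed source, scaled by its depth, onto its destination."""
    out = dict(params)
    for src, dest, depth in mod_matrix:
        out[dest] = out.get(dest, 0.0) + depth * sources.get(src, 0.0)
    return out

base = {"amp_level": 0.0, "osc_pitch": 0.0, "filter_cutoff": 0.3}
live = {"velocity": 0.8, "expression_pedal": 0.6}
print(apply_mod(live, base))

Whatever hypothetical guitar-centric control one imagines would still land in a slot like these; the available destinations don't change.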


Point is, what can be done within a synth already exists with current tools. It is conceivable that new synths or upgrades could add guitar-centric controls. But right now, there is no there... there. Again, that doesn't have anything to do with MIDI.

Lastly, the market for this guitar synth stuff is incredibly tiny relative to the guitar and synth markets overall. It's admirable what Roland has brought to the table and the resources they've already made available to us. Make the most of what it is rather than objecting to what it isn't.

Cheers, TJ
 
May I ask, have you ever created your own new patches from scratch in a complex synth like ZenCore, or even a simple one like a Minimoog?

Yes, I have. More than once. And in even more complex synths, such as Zebra 2.

Much, if not almost all, of what you are hoping for isn't available within these synths, and that has nothing to do with MIDI.

In Zebra (or Alchemy, or Absynth, or Kontakt, to also include a sampler, etc. etc.), everything I am "hoping" for is there. Anything can be modulated.

Look at the knobs, switches, and controls on this Minimoog and consider where the desired guitar-centric applications might be applied. Realistically, there isn't much to work with at the end of the line that isn't already available without all that guitar stuff.

Well, seriously, the Minimoog at least is a very basic synth given its options (can't tell about the ZenCore shot, as it doesn't show the control tab). You should look into somewhat more complex ones before telling me the things I'm "hoping" for aren't there. Because they are.

Point is, what can be done within a synth already exists with current tools.

But that's not the point at all. The point is that the data these synths could deal with isn't extracted from your guitar playing (or is extracted in a sort of "wrong" way). And that's all I have been saying from the beginning of this "discussion". It's absolutely provable, too. Guitar-to-MIDI so far is an extremely limiting technology. Can it still be very useful? Of course (and I never said anything else). Could it still be vastly improved? Perhaps (it won't happen too likely, though - super niche market and all that...).
 
Let me give you an example of how limited Guitar-to-MIDI is:
Load the best (as in most realistic/authentic) sampled (or physically modeled, or hybrid...) guitar patch you have access to into your favourite sampler/synth. Now play that patch through a Guitar-to-MIDI system. Does it feel even *remotely* like playing a real guitar? The very clear answer is: no, it doesn't, not at all.

So why is that? It's likely not an issue of the receiving side, as those highly complex newer libraries come with tons of CC support, keyswitches and whatnot. Maybe they even come with aftertouch support. Or, one step further, with MPE support, so you could simulate double bends, individual vibrato per note in a chord and whatnot.
And still, none of that will get you even remotely more realistic results as soon as you're using a Guitar-to-MIDI system to control it. You might be able to use some expression pedals, but that's about it. Keyswitches, aftertouch, MPE? Nada (well, you could use keyswitches - at the expense of having to adjust your playing yet some more). And at the same time, a lot of the things you might do naturally on a guitar will be lost.
A well-trained keyboard player, however, will likely be able to get a pretty realistic-sounding guitar performance out of that guitar patch/library. Go figure (or don't...).
 
They could at least try to take something like the SY-300/1000 tech (not necessarily requiring a hex pickup) further.
The synth sounds and effects in the box are STELLAR. Not polyphonic, and there are certainly glitches no matter how careful you are with your technique. It's just as much about a company that is constantly improving and refining vs., well, Boss.
 
Fwiw, personally I'm really only interested in polyphonic synth stuff. When it comes to leads, I'm absolutely fine with what guitars through whatever FX have on offer, but I really wish there were some more well-working oddball chord sounds I could add to my arsenal. Unfortunately, I'm afraid that for the really interesting things it'd still take a hex pickup to truly pique my interest (as in spending serious money).

And fwiw #2: in my world, none of that involves MIDI at all. I think of Guitar-to-MIDI as something extremely useful for some things, but for real playing I vastly prefer the SY approach, as in actually using the string signal (and all of its warts) as a source sound.
 
Let me give you an example of how limited Guitar-to-MIDI is:
Load the best (as in most realistic/authentic) sampled (or physically modeled, or hybrid...) guitar patch you have access to into your favourite sampler/synth. Now play that patch through a Guitar-to-MIDI system. Does it feel even *remotely* like playing a real guitar? The very clear answer is: no, it doesn't, not at all.

So why is that? It's likely not an issue of the receiving side, as those highly complex newer libraries come with tons of CC support, keyswitches and whatnot. Maybe they even come with aftertouch support. Or, one step further, with MPE support, so you could simulate double bends, individual vibrato per note in a chord and whatnot.
And still, none of that will get you even remotely more realistic results as soon as you're using a Guitar-to-MIDI system to control it. You might be able to use some expression pedals, but that's about it. Keyswitches, aftertouch, MPE? Nada (well, you could use keyswitches - at the expense of having to adjust your playing yet some more). And at the same time, a lot of the things you might do naturally on a guitar will be lost.
A well-trained keyboard player, however, will likely be able to get a pretty realistic-sounding guitar performance out of that guitar patch/library. Go figure (or don't...).

Personally, I would not use a guitar synth to emulate a guitar. That's what I have guitars for. Which is part of the circular discussion we are having here :)

I may wish to try that with a synth and a controller like Marco Parisi, who does a pretty good Hendrix on the Seaboard RISE (that guy is incredible).


MPE, or at least an equivalent of MPE, is already possible with guitar synths that transmit individual MIDI channels per string, like the SY-1000. I demo'd that in this video with the SY-1000 and Hydrasynth. The receiving MPE synth reacts as if the SY-1000 were sending MPE data, because each string sends on its own channel.



I also used MPE with the SY-1000 and GS Music e7 in this video.



The GM-800 doesn't do individual MIDI channels per string, though. This actually helps its MIDI tracking, because the data stream is much smaller.
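Here's a sketch of why per-string channels read like MPE at the receiving end, using the Python mido library (the port name is whatever your interface reports; the notes are an open E minor chord):

import mido

# Per-string MIDI channels, SY-1000 style: string N sends on channel N,
# so a receiving synth can bend each note independently - the same
# one-channel-per-note idea that MPE formalizes.

out = mido.open_output("SY-1000 MIDI OUT")   # adjust to your setup

# (channel, note): open Em voicing, high E string on channel 0
E_MINOR = [(0, 64), (1, 59), (2, 55), (3, 52), (4, 47), (5, 40)]

for channel, note in E_MINOR:
    out.send(mido.Message("note_on", channel=channel, note=note, velocity=90))

# Bend only the G string's note; the other five stay put - exactly what
# a single-channel stream (GM-800 style) cannot express.
out.send(mido.Message("pitchwheel", channel=2, pitch=4096))  # ~+1 semitone

With everything on one channel, that pitchwheel message would drag all six notes along - which is the trade the GM-800 makes for its lighter data stream.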
 