Serious questions. If pick scrapes, dive bombs, pinch harmonics, and the like could be communicated via MIDI (or any other protocol, for that matter) to a synth, what would they be used for? What would they translate into? Would they open or close the filter? Increase the resonance? Create a controlled pitch slide? Generate some modulation of vibrato, tremolo, additive harmonic overtone content, or something else? Change the envelopes and how they react? Generate new notes? How would you assign such a control source to a modulation destination within the synth, and what range of effect would it have?
Simple answer (still absolutely serious): Anything you like could basically be possible. As it is, nothing is possible.
What amount of bit resolution would be needed to make it all work without clogging up the data stream and without getting a zipper-like reaction instead of the smooth expression that is desired?
Data resolution shouldn't be an issue. You can go all mad with synths and record tons of controller information on the same track, all transmitted at once to whatever will create the final sound.
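For scale: plain MIDI 1.0 CCs are 7-bit (128 steps), which is where audible "zipper" stepping usually comes from, but the spec already allows pairing a controller (0-31) with its LSB counterpart (32-63) for 14-bit resolution, i.e. 16384 steps. A minimal sketch of encoding such a pair as raw bytes (pure Python, no MIDI library assumed):

```python
def cc14(channel, cc_msb, value14):
    """Encode a 14-bit controller value (0-16383) as an MSB/LSB CC pair.

    Per the MIDI 1.0 spec, controllers 0-31 have optional LSB
    counterparts at 32-63, giving 16384 steps instead of 128 -
    usually enough to avoid audible zipper stepping.
    """
    if not 0 <= value14 <= 0x3FFF:
        raise ValueError("14-bit value out of range")
    status = 0xB0 | (channel & 0x0F)   # Control Change on this channel
    msb = (value14 >> 7) & 0x7F        # upper 7 bits -> CC n
    lsb = value14 & 0x7F               # lower 7 bits -> CC n+32
    return bytes([status, cc_msb, msb, status, cc_msb + 32, lsb])

# A mid-scale value splits into MSB 64 / LSB 0:
msg = cc14(0, 11, 8192)                # CC 11 = Expression
```

So the data format itself isn't the bottleneck; whether a given synth actually listens to the LSB half is another matter.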
It's all (!) about the actual conversion process. That very process would need to be able to identify the differences between smooth glides and heavy slurs. It would need to identify dead notes as such and not as anything with an assigned pitch (which is what all systems right now are doing). Etc.
In other words: Anything not exactly sounding like a clear note when you play your guitar would have to be translated into controller data (because there's no "advanced note parameter" subset in the MIDI protocol).
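To make that concrete, here's what such a translation could look like. Everything in it is an assumption: the gesture names, the detection itself, and the CC numbers (20-22 are undefined in MIDI 1.0, so they're free for custom use) - there's no existing standard for any of this.

```python
# Hypothetical mapping from detected non-pitched guitar gestures to
# CC numbers. CCs 20-22 are undefined in MIDI 1.0, so a converter
# could claim them - this assignment is an assumption, not a standard.
GESTURE_CC = {
    "dead_note": 20,
    "pick_scrape": 21,
    "pinch_harmonic": 22,
}

def gesture_to_cc(channel, gesture, intensity):
    """Turn a detected gesture into a 3-byte Control Change message.

    intensity is normalized 0.0-1.0 and scaled to the 7-bit CC range.
    """
    cc = GESTURE_CC[gesture]
    value = max(0, min(127, int(intensity * 127)))
    return bytes([0xB0 | (channel & 0x0F), cc, value])
```

The hard part is of course everything upstream of this function: deciding, from the audio, that what just happened was a dead note and not a quiet fretted one.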
As long as these things can't be done (and IMO we're faaar away from anything like that happening, especially given that all this is a super niche thing anyway), the question remains academic.
In addition, the receiving side would likely have to be adjusted to make use of that additional data, too - but on many synths that could be done already: you can basically use any controller information to control anything through direct CC routing, modulation matrices and whatnot.
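A modulation-matrix slot of the kind described is, at its core, just a scaling job: take a 7-bit CC value and map it into a parameter range with a bipolar depth. A minimal sketch (the parameter names and ranges are illustrative, not any particular synth's API):

```python
def mod_route(cc_value, depth, param_min, param_max):
    """One modulation-matrix slot: scale a 7-bit CC value (0-127)
    into a parameter range, with bipolar depth as most synths offer.

    depth = 1.0 -> full positive sweep, -1.0 -> inverted, 0.0 -> off.
    """
    norm = cc_value / 127.0
    span = (param_max - param_min) * depth
    base = param_min if depth >= 0 else param_max
    return base + span * norm

# e.g. route a gesture CC to filter cutoff, 200 Hz to 8 kHz:
cutoff = mod_route(127, 1.0, 200.0, 8000.0)
```

Any CC a hypothetical converter emitted would drop straight into a slot like this - that's why the receiving side is the easy half of the problem.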
Lastly, what types of synth sounds would benefit from those guitar-centric playing styles?
Anything you want.
In fact, even if you didn't explicitly put any of that additional controller data to dedicated use, everything would still benefit. Because in case the guitar signal could be dissected like that (which, as said, I doubt will happen any day soon), the additional information could just as well be filtered out a lot more easily, resulting in a cleaner signal arriving at your synth's MIDI in.
As an example: You're playing some funky rhythm stuff, including lotsa dead notes - exactly the kind of stuff that Guitar-to-MIDI translates pretty badly. Now, in case our super-hypothetical new Guitar-to-MIDI converter could separate the clean note data from the mutes and turn the latter, say, into CC data, you could just ignore that on the receiving end and only use the clean chord data for whatever synths. That could for instance be a 4-part horn section; typical drop 2 chords as often used in funky guitar translate to horns extremely well - and all of a sudden, your funky guitar would be doubled by a horn section (which can sound quite great, been doing that kinda stuff already...) - in realtime, simply because all those pesky dead notes would be ignored by the horn section (which is the trick to make this work).
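The receiving-end trick above amounts to a trivial event filter: assuming the converter tags mutes as a dedicated CC (CC 20 here, same assumption as before), the horn patch just drops those messages and keeps the notes. Sketch, with messages as (status, data1, data2) tuples:

```python
MUTE_CC = 20  # assumption: the converter tags dead notes as this CC

def horns_filter(messages):
    """Keep note-on/note-off events, drop the dead-note CC entirely,
    so only clean chord data reaches the horn section."""
    out = []
    for status, d1, d2 in messages:
        if (status & 0xF0) == 0xB0 and d1 == MUTE_CC:
            continue  # the horn section ignores mutes
        out.append((status, d1, d2))
    return out
```

Other patches in the same rig could read CC 20 instead of ignoring it - same stream, different filters per destination.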
On other patches you may want to explicitly make use of that additional data. Percussive dead notes could surely work great on any synth patch sounding a bit, say, clavinet-ish (on a realistically sounding clavinet patch they likely wouldn't, but who knows...). But for that to work properly, the converter would have to translate them into additional CC data, too - which you could then use to control, say, an envelope, just so the mutes you play have their sort of equivalent on the synth used.