

Developers: Thoughts about keygroup


written by: NothanUmber

Thu, 26 Jan 2012 11:08:10 +0000 GMT

A thought about the current implementation of keygroups:
Wouldn't it be more logical to have e.g. a key offset (added to or subtracted from the key numbers) instead of tonic and octave offsets? Both of those are musical, depend on the scale, and would thus be better assigned to scalers.
Then one could reuse keygroups just for moving the "window" of keys and leave all music-related mapping to the scalers.
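The split being proposed can be sketched in a few lines of Python. This is purely illustrative, not EigenD code: `keygroup_offset`, `scaler_frequency`, and the tuning constants are all hypothetical names, assuming a keygroup that only shifts key numbers and a scaler that owns all the musical mapping.

```python
# Hypothetical sketch of the idea above: the keygroup shifts raw key
# numbers; the scaler alone turns key numbers into pitches.

def keygroup_offset(key_number, key_offset):
    """Move the 'window' of keys by a plain, scale-agnostic key offset."""
    return key_number + key_offset

def scaler_frequency(key_number, scale, tonic_hz=261.63):
    """Map a (possibly offset) key number to a frequency using a scale.

    'scale' is a list of semitone offsets; octave wrapping happens here,
    in the scaler, not in the keygroup."""
    octave, degree = divmod(key_number, len(scale))
    return tonic_hz * (2 ** octave) * (2 ** (scale[degree] / 12))

major = [0, 2, 4, 5, 7, 9, 11]        # semitone offsets of a major scale
key = keygroup_offset(3, key_offset=7)  # window shifted up by 7 keys
freq = scaler_frequency(key, major)     # musical mapping done downstream
```

The point of the sketch is that the keygroup never needs to know the scale: the same offset works whatever tuning the scaler applies.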


written by: john

Thu, 26 Jan 2012 13:47:30 +0000 GMT

Hi Ferdinand

Interesting you should mention keygroups - we've been discussing some changes to them here this morning. We're actually moving towards a model (and are nearly there, in fact, due to changes implemented over the last few months) where the musical and physical mappings of keys are totally independent. It turns out that, for a variety of reasons, mixing the two up is bad - sometimes you want one, sometimes the other. We will probably also split the existing Keygroup agent into two Agents: one that does mappings (both musical by course/key and physical by row/column) and one that does the multi-way destination switching. The advent of Workbench makes this a better logical separation.

The octave/tonic values set in the keygroups don't actually do anything apart from adding some metadata to the signal, which the Scaler then uses to add the frequency. The general principle of being able to add 'hints' to a signal flow upstream of where they are used turns out to be really useful: quite often one wants to mentally associate a configuration option with the signal switching or routing point, not with the thing that actually does the work. So it's very useful to be able to think of a keygroup as 'having an octave', for example, even though it doesn't actually do anything to the signals - it just passes that octave hint downstream. Remember that you can easily override these upstream hints in the Scaler.

Scalers don't actually do a musical mapping of keys; they just add frequencies. People have used them to do musical mapping (in the absence of us making it easy to do in other ways, really), but it's certainly not the 'right' way to do it - it brings a host of limitations when done like that. Now that we are introducing easier key mapping with Workbench (Al's just started work on a better editor this morning, btw), hopefully this pattern will go away.
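The hint mechanism described here can be sketched as follows. This is a toy model, not EigenD code: the dict-based events, `keygroup`, `scaler`, and the override parameter are all hypothetical, assuming hints ride along as metadata and can be overridden downstream.

```python
# Illustrative sketch of upstream 'hints': the keygroup attaches an
# octave hint as metadata without touching the signal itself; the
# scaler consumes the hint (or a local override) to add the frequency.

def keygroup(event, octave_hint):
    """Pass the signal through unchanged, attaching an octave hint."""
    event = dict(event)            # copy; the signal data is untouched
    event["octave"] = octave_hint
    return event

def scaler(event, octave_override=None):
    """Add a frequency, using the upstream hint unless overridden."""
    octave = octave_override if octave_override is not None else event.get("octave", 0)
    event = dict(event)
    event["frequency"] = 261.63 * (2 ** octave)
    return event

hinted = scaler(keygroup({"key": 5}, octave_hint=1))        # uses the hint
forced = scaler(keygroup({"key": 5}, octave_hint=1), octave_override=0)
```

The keygroup 'has an octave' only in the sense that it labels the stream; all the actual work happens in the scaler.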


written by: dhjdhj

Thu, 26 Jan 2012 20:33:04 +0000 GMT

I don't know if this is of interest, but the way I did this with my Max library was to create an object through which you define a desired keygroup using a range.

The range specifies the row and column start and end points, so you can take any piece of the Alpha you want and make it a group (including just a single key if you want).
The output is normalized (row and column always start at 0) and can then be fed into an object that provides a tuning, such as guitar, major, minor, or whatever.

In the image below, I have a group consisting of the first 7 rows and all 5 columns. It gets processed by the Guitar object such that the first key is MIDI note number 48 and all notes are on channel 1. That gets sent into a synth. (The outlets represent notes, MIDI channel, and the X, Y, Z values.)
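The normalize-then-tune pattern described above can be sketched in Python (the original is a Max library; this is just an illustration of the same idea). All names are hypothetical, and the tuning is simplified to a uniform interval per course rather than a true guitar tuning.

```python
# Sketch of the two-stage approach: first normalize a rectangular key
# range so its origin is (0, 0), then map the normalized position to a
# MIDI note with a separate tuning object.

def normalize(row, col, row_start, col_start):
    """Normalize a key position so the group's origin is (0, 0)."""
    return row - row_start, col - col_start

def guitar_tuning(row, col, base_note=48, course_interval=5):
    """Map a normalized (row, col) to a MIDI note.

    Simplified guitar-style tuning: each column (course) is a fourth
    (5 semitones) above the previous; rows step by a semitone."""
    return base_note + col * course_interval + row

# Key at absolute position (row 2, col 1) in a group starting at (0, 0):
row, col = normalize(2, 1, row_start=0, col_start=0)
note = guitar_tuning(row, col)   # 48 + 1*5 + 2 = 55
```

Because the group's output is normalized, the same tuning object works for any slice of the keyboard, which is the benefit the post describes.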

Max image

written by: NothanUmber

Tue, 31 Jan 2012 22:02:38 +0000 GMT

Another question regarding the keygroup (and similar agents): the keygroup has a number of values that are not transformed in any way but only forwarded from input to output (breath, pedals, strip controllers, etc.). So: is the only way to keep data together as a bundle to route all components of the bundle through an agent? If not: what would I lose if I connected e.g. the strip output of the keyboard agent directly to a synth agent instead of routing it through the keygroup, scaler, etc. first?

All the best,

written by: NothanUmber

Tue, 31 Jan 2012 23:02:22 +0000 GMT

And: doesn't the event ID stay the same even if the key information is transposed after it leaves the keyboard agent? So: is it even necessary to route per-key info like roll, yaw, pressure, etc. through an agent that only fiddles with the key stream itself? Can't that still be correlated afterwards by a synth agent that needs e.g. key, pressure, and roll info:

keyboard>-key->transposablekeyarea>-key,keyold->scaler>-key,keyold,pitch->synth (key info is transformed by transposablekeyarea)
keyboard>-yaw->synth (yaw has to be correlated to key - should be possible because event ID stays the same?)
keyboard>-strippos->synth (has to be correlated to all key events with the same timestamp?)

written by: jim

Thu, 2 Feb 2012 14:27:55 +0000 GMT

If you routed the strip and yaw signals around the keygroup and scaler, they would still work - the keygroup and scaler agents don't change the event IDs. But they wouldn't be switched (since that's what the kgroup does), so they would be live all the time.

Some Agents do change the event IDs, so your mileage may vary, but most don't. Offhand, the only cases I can think of where the IDs get changed are:

* Recorders - the recorder prefixes the event IDs to separate the live feed and the different takes
* The cycler - since it synthesises key presses

Also, in the standard setups, we feed multiple keygroups into one instrument (usually into the rig), and these connections are made with the 'using' flag to keep each keygroup input distinct. The 'using' flag prefixes the event IDs coming into the connection. If you stuck a strip controller straight in with no 'using', it would apply to all the keygroups, all the time.
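The effect of the 'using' flag can be sketched like this. A minimal illustration, not EigenD code: `with_using_prefix` and the dotted-string IDs are hypothetical, assuming IDs are hierarchical and correlation works by prefix.

```python
# Sketch: prefixing event IDs keeps signals from several keygroups
# distinct when they merge into one instrument.

def with_using_prefix(event_id, using):
    """Prefix an event ID so each incoming connection stays distinct."""
    return f"{using}.{event_id}"

# The same physical event, arriving via two different keygroups:
a = with_using_prefix("1.1", using="1")   # becomes '1.1.1'
b = with_using_prefix("1.1", using="2")   # becomes '2.1.1'

# With the prefix, downstream correlation can tell the sources apart;
# without it, both events would carry the identical ID '1.1' and a
# strip controller patched straight in would correlate with everything.
```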

We are planning to make minor changes to event ID correlation to help with things like the recorders, and with Agents like harmonisers and chord generators, where a higher level of grouping is required.

written by: NothanUmber

Thu, 2 Feb 2012 14:40:45 +0000 GMT

Hi Jim,

thanks for elaborating on this!
Can you please go into more detail about what you mean by "But they wouldn't be switched (since that's what the kgroup does), so they would be live all the time."? (So, is routing them through the keygroup an optimization, so that e.g. strip position signals are suppressed when no key is pressed? In that case: why do we need to route e.g. yaw/roll, which only produce values when a key is pressed anyway?)

And: if I write an agent that "messes" with the key stream, do I really have to change the event ID, or could I just leave it as it is, so that I don't have to remap all the other correlated events too (and thus route them all through my agent even though I don't otherwise do anything with them)?


written by: jim

Thu, 2 Feb 2012 16:01:38 +0000 GMT

The keygroup switches an incoming signal to one or more outputs. This includes its strip, breath, etc. inputs.

All the inputs are switched. So if you want the strip input to an instrument to turn off when you select away from that instrument, you should route it via the kgroup. Otherwise, imagine you had an instrument with a really long release: you could trigger a note, switch away to a different instrument, and the strip would still affect the releasing note of the old instrument.

Or imagine that you set up a strip to pitch-bend all the takes being played by a recorder (which you could do by leading the strip around the recorder instead of through it). Then, when you switched away, the strip would still be live.

Of course, that might be what you want...

It's good practice not to change event IDs unless you really have to. The cfilter model preserves the IDs. Usually, the only reason to change them is that you are synthesising more than one concurrent output event from an input event (a chord generator, maybe), or merging more than one input event (a fingerer).

If you do mess with the event IDs, it would be a good idea to allow other signals to be routed via your agent so that (although their data is left alone) their event IDs are changed in the same way.

This was quite hard in the past, because event IDs were overloaded with the key number. That was the primary motivation for the new key input in 2.0.

I think a good rule of thumb is to think of an event as representing a physical act (like hitting a key). If you only change the key stream, I don't think you should change the event ID.

To go back to the chord generator example, a good thing to do there would be to suffix the primary incoming ID to generate the notes in the chord. So incoming event 1.1 would generate 1.1.1, 1.1.2, and 1.1.3. That way, the pressure, roll, and yaw signals from the triggering key would apply to all the notes in the chord that the key generates, without the chord generator being involved in the signal flow.
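The suffixing scheme just described can be sketched as follows. This is an illustration only: `chord_ids` and `correlates` are hypothetical names, and `correlates` is a simplified reading of the prefix-based correlation rule discussed in this thread.

```python
# Sketch: a chord generator derives per-note IDs by suffixing the
# triggering key's event ID, so upstream control signals carrying the
# parent ID still correlate with every generated note.

def chord_ids(trigger_id, n_notes):
    """Suffix the incoming event ID once per generated note."""
    return [f"{trigger_id}.{i}" for i in range(1, n_notes + 1)]

def correlates(control_id, note_id):
    """Simplified correlation: an event correlates with any event whose
    ID it is a (dotted) prefix of, or which equals it."""
    return note_id == control_id or note_id.startswith(control_id + ".")

notes = chord_ids("1.1", 3)    # ['1.1.1', '1.1.2', '1.1.3']

# Pressure/roll/yaw events from the triggering key carry ID '1.1',
# so they correlate with all three chord notes...
assert all(correlates("1.1", n) for n in notes)
# ...but not with notes generated by a different key press:
assert not correlates("1.2", notes[0])
```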

The next change will be to have event IDs composed of two parts, a leading 'channel' part and a trailing 'voice' part. Correlation will be extended to match both parts separately using the current rules: two events correlate if both their channel and voice parts correlate.

Harmonisers, fingerers and chord generators (Agents that deal with more than one key press) can use the channel part to identify key presses that belong together (by being in the same channel).

At the moment there are problems if you put such an agent downstream of a recorder, because the key presses from the different takes can't be told apart.

Recorders will distinguish takes by adding to the channel part. 'Using' will also add to the channel part.
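The proposed two-part correlation rule can be sketched like this. Everything here is hypothetical, assuming each part correlates by the same prefix rule as today's one-part IDs.

```python
# Sketch of the proposed (channel, voice) event IDs: two events
# correlate only if BOTH their channel parts and their voice parts
# correlate, each under the existing prefix rule.

def part_correlates(a, b):
    """One part correlates with another if they are equal or one is a
    dotted prefix of the other."""
    return a == b or b.startswith(a + ".") or a.startswith(b + ".")

def correlates(id_a, id_b):
    """Two-part correlation: channel AND voice must both correlate."""
    (ch_a, v_a), (ch_b, v_b) = id_a, id_b
    return part_correlates(ch_a, ch_b) and part_correlates(v_a, v_b)

# Same channel, related voices: a control event on voice '3.2'
# correlates with a chord note on voice '3.2.1'.
assert correlates(("1", "3.2"), ("1", "3.2.1"))

# Two recorder takes add distinct channel parts, so identical voices
# from different takes no longer correlate and can be sorted out:
assert not correlates(("1.1", "3.2"), ("1.2", "3.2"))
```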

written by: NothanUmber

Thu, 2 Feb 2012 16:23:39 +0000 GMT

Thanks for the explanation, much clearer now! I hope I understand enough now to further flesh out the transposable key area agent idea - I hope to have something worth discussing as a common base for "transposers", "fingerers" and whatnot early next week.
