Designing a custom Push2 instrument in Max for Live/JavaScript: Part 2
Posted on December 27, 2020

This is part 2 of the two-part series on designing custom Push instruments in Max for Live (using JavaScript). In Part 1 we explored the LiveAPI object and used it to construct an abstraction that lets us monitor when the track that our instrument lives on is selected or deselected. In Part 2 we get to the fun stuff and actually develop our Push instrument.
As a reminder, we want to develop an instrument that has a fully custom Push layout, but integrates nicely with the Ableton workflow: when the track of our instrument is selected, it will show the custom Push layout, but when another track is selected, we get the regular layout:
If you want to follow along, download Trichord.amxd, place it on a MIDI track, and open the patch in Max.
Trichords
What the instrument does, exactly, is not really the point of this blog post, but I’ll briefly describe it to make it easier to see why we’re doing certain things. The goal is to develop an instrument that allows us to easily explore Japanese pentatonic scales using trichords, a term coined by Tommaso Zillio in his YouTube video The Simple Theory Of Japanese Music Scales. The basic idea is very simple: scales consist of two trichords (sequences of three notes) like this:
C | C♯ D D♯ E | F | G | G♯ A A♯ B | C
In other words, the C, F and G are always fixed, but then the note in between the C and the F and the note in between the G and the C can be varied. Some standard choices are
| Scale | First choice | Second choice |
|---|---|---|
| Miyako-Bushi | C♯ | G♯ |
| Ritsu | D | A |
| Min Yo | D♯ | A♯ |
| Ryu Kyu | E | B |
but non-standard choices are possible as well.
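To make the interval arithmetic concrete, here is a small sketch (a hypothetical helper, not taken from trichordstate.js) that computes the five pitch classes of such a scale as semitone offsets from the root: 0 (the root), 5 (the fourth) and 7 (the fifth) are fixed, and the two trichord choices fill in the gaps:

```javascript
// Hypothetical sketch: compute the pitch classes of a trichord scale as
// semitone offsets from the root. The offsets 0, 5 and 7 (C, F and G when
// the root is C) are fixed; `first` is the offset of the note between the
// root and the fourth (1..4), `second` that of the note between the fifth
// and the octave (8..11).
function trichordScale(first, second) {
  return [0, first, 5, 7, second];
}

var miyakoBushi = trichordScale(1, 8);  // C, C♯, F, G, G♯
var ryuKyu      = trichordScale(4, 11); // C, E, F, G, B
```

The real trichordstate.js may organize this differently; the sketch is only meant to pin down the interval structure.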
Our instrument makes this available to the player in a very direct manner: on the left are four buttons that can be used to pick the first note; on the right are four buttons to pick the second note; and in the middle are six buttons that can be used to actually play the scale. We also provide two dials to select a standard scale and a root note for the scale. See the short video on YouTube for a demonstration of how the instrument is used.
In the code, the theory of trichords is implemented in trichordstate.js; this is just straightforward JavaScript code with nothing that is specific to the Push, or even to Max for Live or Ableton, so I’ll not discuss it further in this blog post. You might want to take a quick look to familiarize yourself with this module.
Patcher top-level
If you open the patcher in Max (and exit presentation mode), you will see
As before, the logic of the patcher is implemented by the [js] object in the center of it all. Let’s briefly discuss the infrastructure around it. The top two boxes are not specific to our device, and would be useful in any custom Push instrument:
Top-left, “Initialize” box
As we saw in Part 1, we cannot make use of the LiveAPI object until our device has been fully loaded. Therefore we send our code an init in response to a bang from live.thisdevice. We also respond to messages from live.thisdevice (on its second outlet) that tell us when our device is enabled or disabled, grabbing or releasing control over the Push’s button matrix as appropriate. (We will see how controlButtonMatrix actually works below.)
Top-right, “Reset” box
At the end of Part 1 we discussed that it is important to explicitly delete observers. To make this convenient during development, pressing the reset button does three things: first it deletes all observers, only then does it reload the JavaScript code, and finally it calls init again. The JavaScript code itself sets autowatch to 0 so that it doesn’t automatically reload whenever it is saved.
The bottom two boxes are specific to our instrument:
Bottom-right, “Parameters” box
Here we collect the parameters that make up the state of our device. The two live.dial objects to select the scale and the root note automatically store their own state with the Live set without us having to do anything; for the two custom notes (which don’t otherwise have a corresponding UI element), we use a simple pattr object to store their value. There are much more sophisticated ways to handle parameter storage, but this simple approach works just fine for our purposes. The only thing we have to make sure of is to select Parameter Mode Enable, and we’re good to go. The rest of this box does simple routing, constructing and deconstructing messages of the form scale x, root x, custom1 x and custom2 x.

One thing of note here is the use of live.banks. In general live.banks can be used to manage large sets of parameters; we use it here because, apparently, without it the live.dials sometimes won’t show up on the Push. Fortunately it’s easy enough to use: just drop the live.banks into the patcher, open it, and add the two dials you want shown on the Push.
Bottom-left, “Debug messages” box
This just contains some debug messages that we can send to our patcher during development. If you click on either of them, you will see that the Push changes its display and shows all available colors. This can be helpful when trying to select some colors to use. (If you do use this, you might want to add a post call to the buttonPressed function in push.js to log the colors of buttons to the console as they are pressed.) We will not make use of these messages in this blog post.
Finally, let’s discuss the MIDI routing at the bottom. When our device takes control over the Push’s button matrix, we must generate our own MIDI events in response to button presses. We do this by generating MIDI notes and sending them to midiout.
In addition, we also route midiin straight to midiout. The first reason is that when our device is disabled (and the Push is showing the normal instrument layout), MIDI works as normal. Moreover, even when our device is enabled, we do not take control over the Push’s pitch bend strip, and so pitch bend information will still come in on midiin. By routing it straight to midiout, pitch bending works as normal even when our custom instrument is used.
The JavaScript code
The bulk of the work happens in the JavaScript code. We will describe this in a bottom-up manner, starting with how we interface with the Push, and working our way up to how our specific instrument uses that code.
Interfacing with the button matrix
Both the Push controller and its button matrix are represented by LiveAPI objects. We start by looking at the code for interfacing with the button matrix; see buttonmatrix.js. Supposing we already have the LiveAPI object corresponding to the Push, here’s how we find the button matrix:
```javascript
function findButtonMatrix(push, callback) {
  var buttonMatrixId = push.call("get_control", "Button_Matrix");
  var buttonMatrix   = new LiveAPI(callback, buttonMatrixId);
  buttonMatrix.property = "value"; // Monitor the buttons
  return buttonMatrix;
}
```
The Push exports a whole bunch of controls; here we are interested only in the Button_Matrix, but feel free to explore: get_control_names can be used to get a list of all of them. The property that we monitor is the value of the button matrix: this is how we respond to button presses. Every time the user presses a button, the callback of the LiveAPI object is called and is passed the column, row and velocity.
The only thing we do in the ButtonMatrix constructor is call findButtonMatrix with a callback that makes these arguments available in a slightly more convenient format:
```javascript
exports.ButtonMatrix = function(push, object, callback) {
  this.buttonMatrix = findButtonMatrix(push, function(args) {
    if(args[0] == "value" && args.length == 5) {
      callback.call(object, args[2], args[3], args[1]);
    }
  });
}
```
(We omit args[4] here, which I think might be a MIDI channel; I’m not entirely sure about that.)
The other important function in ButtonMatrix makes it possible to change the colors of the buttons. It is only a single line, but it makes setting colors more convenient and, moreover, documents how this must be done in the first place:
```javascript
setColor: function(col, row, color) {
  this.buttonMatrix.call("send_value", col, row, color);
}
```
Finally, ButtonMatrix exports a deleteObservers function in the same way that OurTrack does.
Interfacing with the Push
The Push class in push.js contains some low-level code for interfacing with the Push, as well as some code to make working with the Push more convenient. Here we will primarily focus on the low-level code, and just summarize the rest.
First, we need to find the Push. This is done by the following code:
```javascript
function findPush() {
  var liveApp = new LiveAPI(null, "live_app")
  var numControlSurfaces = liveApp.getcount("control_surfaces");
  for (var i = 0; i < numControlSurfaces; i++) {
    var controlSurface = new LiveAPI(null, "control_surfaces " + i);
    if (controlSurface.type == "Push2") {
      return controlSurface;
    }
  }
  return null;
}
```
We enumerate all control surfaces and stop at the first one whose type is Push2. Obviously, if you need the code to work with multiple Push devices, you’ll have to adjust this. Note that we also make no attempt to deal with the Push being plugged in or unplugged while our device is loaded: we run this code once when the device is first initialized, and then never again. There is definitely room for improvement here, but that is outside the scope of this tutorial.
The other important low-level function that our Push class provides makes it possible to grab or release control over the button matrix. Until we grab control, we can neither set the button colors nor listen for button presses:
```javascript
controlButtonMatrix: function(control) {
  if(!this.checkFound()) return;
  if(control) {
    this.controller.call("grab_control", "Button_Matrix");
    var initColorsTask = new Task(initColors, this);
    initColorsTask.schedule(10);
  } else {
    this.controller.call("release_control", "Button_Matrix");
  }
}
```
The most important part here is the use of grab_control and release_control. However, every time we release and then grab control again, any colors that we previously set will have been erased, and the Push will appear completely blank. Our Push class therefore remembers which colors have been set, and provides a function initColors that we can use to restore all previously specified colors after we grab control. There is however a subtlety here: in my testing I found that it does not work to set the colors immediately after grabbing control; we must introduce a tiny delay. I’m not entirely sure why this is, but setting up a Task to run initColors seems to resolve the problem.
This is basically all the low-level code in the Push class. The main other feature the Push class offers is that it stores actions for each button; these actions can be set using

```javascript
setAction: function(col, row, callback) {..}
```

and the code then automatically makes sure to invoke the right action when a button is pressed; the callback is passed the column, row, color and velocity of the button. There is nothing Live or Push specific about that code, however, and so we will not discuss it any further in this blog post. We will see it used in the next section.
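The idea behind that action store can be sketched in isolation (hypothetical names; the real Push class keeps equivalent state internally): actions live in a map keyed by button coordinates, and the button-press handler looks up and invokes the right one:

```javascript
// Hypothetical sketch of a per-button action table. setAction registers a
// callback for a (col, row) pair; buttonPressed dispatches to it, passing
// column, row, color and velocity, and ignores buttons without an action.
function ActionTable() {
  this.actions = {};
}
ActionTable.prototype.setAction = function(col, row, callback) {
  this.actions[col + "," + row] = callback;
};
ActionTable.prototype.buttonPressed = function(col, row, color, velocity) {
  var action = this.actions[col + "," + row];
  if (action) action(col, row, color, velocity);
};
```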
The trichord instrument itself
The Trichord instrument itself is implemented in trichord.js. We will not discuss this code in detail; most of it is straightforward, just implementing the functionality of our instrument. We will only give a summary, and comment on a few subtle aspects here and there.
Initialization
In init we construct the Push object as well as the OurTrack object (which we developed in Part 1); we pass the latter a callback that grabs and releases control over the button matrix as our track is selected and deselected, respectively.
We then set up the colors on the Push as needed, and construct some actions. The way we create the necessary actions is probably entirely natural to a functional programmer, but might require a word of explanation for programmers less used to functional languages. Here’s how we are setting up the colors and the actions for the buttons in the middle that allow us to actually play the scale:
```javascript
for(var i = 0; i < 6; i++) {
  push.setColor(1 + i, 4, 64);
  push.setAction(1 + i, 4, sendNote(i));
}
```
The key thing here is that sendNote is a function that returns a new function:
```javascript
function sendNote(note) {
  return function(col, row, color, velocity) {
    outlet(0, [48 + activeScale[note], velocity]);
  }
}
```
It must return a function with four arguments (button column, row, color and velocity), because this is what our Push class expects. The returned function then ignores all of those arguments except the velocity, and outputs a MIDI note on the first (and only) outlet of our [js] object. Note that when the button is released, this function is called with a velocity of 0, so we don’t have to worry about generating note-off MIDI commands. Finally, activeScale here is a patcher-global variable that contains the list of notes of the current scale; we update it as the user modifies or selects the scale (or the root note).
Routing
The processing of the routing messages (see the description of the “Parameters” box above) is all very similar, and so rather than defining one function per message, we define a single anything function that can handle any of them:
```javascript
function anything() {
  switch(messagename) {
    // Messages that update the state
    case 'scale':
    case 'root':
    case 'custom1':
    case 'custom2':
      state[messagename] = arguments[0];
      updatePush();
      break;
    // Messages we just forward directly to the push object
    case 'showColors':
    case 'controlButtonMatrix':
      if(push != null) {
        push[messagename].apply(push, arguments);
      }
      break;
    default:
      error("Message '" + messagename + "' not understood\n");
      break;
  }
}
```
The updatePush function looks at the current state and updates the Push accordingly, as well as updating activeScale. The anything function also deals with forwarding some messages straight to the Push object; we simply look up the selected function and call it using apply, passing along all arguments.
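The forwarding idiom itself can be shown in isolation (hypothetical object and helper, not taken from trichord.js): looking up a method by name and invoking it with apply passes along whatever arguments the message carried:

```javascript
// Hypothetical sketch: forward a message to the method of the same name on a
// target object, passing the remaining arguments through with apply.
function forward(obj, messagename) {
  var args = Array.prototype.slice.call(arguments, 2);
  obj[messagename].apply(obj, args);
}

var target = {
  log: [],
  showColors: function(a, b) { this.log.push("showColors(" + a + "," + b + ")"); }
};
```

Calling forward(target, "showColors", 1, 2) invokes target.showColors(1, 2), just as the anything function dispatches showColors and controlButtonMatrix messages to the push object.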
Limitation: releasing control when device is removed
Unfortunately, our code has one limitation: if the user removes our device entirely from the track, we do not release control over the button matrix, and so the button matrix of the Push will not be usable until a new Live set is opened (or the Push is rebooted). Users have a workaround: if they first disable the device before deleting it, all is fine. However, it would of course be better if that were not necessary.
Unfortunately, it seems that this is difficult to do, and I’ve not yet found a way that works. There are at least three ways to be notified when a device gets deleted.
However, I have experimented with all of these, and did not manage to get any of them to work. It seems I am not the only one; there are some threads on the Max for Live forums about this topic (for example, [1] [2]), and I’ve noticed that some other Max for Live devices that customize the Push layout suffer from the same problem (for example, Push Colour Notes). If anyone manages to resolve this problem, please let me know!
Conclusions
Interfacing with the Push and designing custom Push instruments is not that difficult, as we have seen. There is however not that much information available on the topic, which makes getting started difficult. My hope is that this tutorial helps to address that problem, and that the code we have developed in this example instrument can be a useful starting point for other custom Push instruments. If you develop anything based on what I’ve described here, I’d love to know!
Further reading
- We implemented the bulk of our logic in JavaScript in this tutorial. I happen to believe that this is probably the better tool for the job, but if you prefer to stay within the Max for Live paradigm, you might want to check out the Ableton Push Programming Tutorials by Darwin Grosse.
- There is also an article by Gregory Taylor on Max for Live Focus: The Push 2 as a Max Controller, although that uses the Max for Live abstractions for interfacing with the Push developed by Jeff Kaiser. These are very well designed, but are intended to be used in Max in standalone mode. I could not get those abstractions to work within Live; Jeff indeed confirmed that this does not currently seem possible.
- Keith McMillen has a series of blog posts describing how to interface with MIDI controllers such as the Push.
- Finally, if you are willing to go real low-level, you can communicate with the Push directly over MIDI (this is in fact what Jeff Kaiser’s abstractions do, as far as I understand). The fine folks at Ableton have made the MIDI specification available on GitHub, which is unusual and laudable!