
Music Notation Community Group

The Music Notation Community Group develops and maintains format and language specifications for notated music used by web, desktop, and mobile applications. The group aims to serve a broad range of users engaging in music-related activities involving notation, and will document these use cases.

The Community Group documents, maintains and updates the MusicXML and SMuFL (Standard Music Font Layout) specifications. The goals are to evolve the specifications to handle a broader set of use cases and technologies, including use of music notation on the web, while maximizing the investment in existing MusicXML and SMuFL implementations.

The group is developing a new specification to embody this broader set of use cases and technologies, under the working title of MNX. The group is proposing the development of an additional new specification to provide a standard, machine-readable source of musical instrument data.


Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.

Final reports / licensing info

  • MusicXML 4.0 – Licensing commitments
  • SMuFL 1.4 – Licensing commitments
  • SMuFL 1.3 – Licensing commitments
  • MusicXML Version 3.1 – Licensing commitments


Co-chair Meeting Minutes: August 27, 2019

MNX-Common by Example

Adrian has been working on how octave shifts should be handled in MNX-Common, and has prepared a pull request #158 to address issue #111, which covers both the specification for and examples of the proposed octave-shift element. The co-chairs welcome feedback from the community about this new specification element. You can view the updated MNX-Common by Example file without checking out the branch here.

If you have any comments about this proposed change, please leave a comment in the pull request.
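As a rough illustration only (the element name comes from the pull request, but the attribute names and values here are assumptions, not the proposed syntax), an octave line in MNX-Common might look something like:

```xml
<!-- Hypothetical sketch; the authoritative syntax is in pull request #158. -->
<measure>
  <directions>
    <!-- An 8va line spanning from here to the "end" location;
         attribute names and value formats are illustrative assumptions -->
    <octave-shift type="8va" end="..."/>
  </directions>
  <sequence>
    <!-- notes under the octave line -->
  </sequence>
</measure>
```

Please treat the pull request, not this sketch, as the source of truth for the proposed syntax.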

Adrian plans to continue work on spanning elements, since he now has a good feel for the issues these notations present; this will address issue #114 among others. He also plans to add a further example of the octave-shift element applying only to a single voice or sequence.

MusicXML 3.2 and SMuFL 1.4

Michael and Daniel did not have any updates on the MusicXML 3.2 or SMuFL 1.4 projects for this meeting.

Next meeting

The next co-chair meeting will be on Tuesday 10 September.

Co-chair Meeting Minutes: August 13, 2019

MNX-Common by Example

Adrian has added a section on time signatures to the MNX-Common by Example page, illustrating how MNX-Common encodes time signatures within the global element. The co-chairs discussed whether to include the optional index attribute on each measure element, which makes the bar number explicit, but decided to keep the example as simple as possible.

The co-chairs discussed the issue of multi-metric music, which requires multiple global elements and the assignment of specific parts to the time signatures in each one. There was also further discussion of making index compulsory, which would enable a sparse representation of the global element, i.e. including only those measure elements in which the time signature changes. It was decided not to revisit this prior decision now.
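A minimal sketch of the approach described above, assuming the element and attribute names from the MNX-Common draft at the time (treat the exact syntax as illustrative rather than authoritative):

```xml
<!-- Time signatures live in the global element, one measure element per bar -->
<global>
  <measure index="1">
    <directions>
      <time signature="4/4"/>
    </directions>
  </measure>
  <measure index="2"/> <!-- no change of time signature -->
  <measure index="3">
    <directions>
      <time signature="3/4"/>
    </directions>
  </measure>
</global>
```

A sparse representation would omit the empty second measure element entirely, which is why it would require making index compulsory.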

Octave lines

Adrian is next going to look at giving examples of octave lines in MusicXML. Octave lines are an example of a spanning element, which appears in the directions of either a measure element (if it applies to all voices) or a sequence element (if it applies to a specific voice). We need to be clear about the end attribute and whether the note or event at that position is itself transposed by the octave line.

Adrian will work up an example for discussion and then move to adding octave lines to the spec. This will be relevant to issue #111.
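For comparison, MusicXML already encodes octave lines with the octave-shift element inside a direction. Note MusicXML's convention that the type attribute describes the direction in which the written notes are shifted, so an 8va line above the staff uses type="down":

```xml
<!-- Start of an 8va line (written notes are an octave below sounding pitch) -->
<direction placement="above">
  <direction-type>
    <octave-shift type="down" size="8"/>
  </direction-type>
</direction>

<!-- ... the affected notes ... -->

<!-- End of the octave line -->
<direction>
  <direction-type>
    <octave-shift type="stop" size="8"/>
  </direction-type>
</direction>
```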

MusicXML 3.2 and SMuFL 1.4 update

There was no discussion of MusicXML 3.2 or SMuFL 1.4 in this meeting.

Next meeting

The next co-chair meeting will be on 27 August 2019.

Co-chair Meeting Minutes: July 30, 2019

Splitting MNX-Common and MNX-Generic

The MNX-Common and MNX-Generic specs have now been split and the pull request merged back to the master branch. The MNX-Common by Example page is also now part of the main MNX-Common repository and is linked to from the main page. This closes issue #98.

Issues under Active Review

Next on Adrian’s list is to pick one of the issues currently under Active Review and to flesh out a proposal by adding more information to the MNX-Common by Example page.

MusicXML 3.2 update

Michael has not made any specific progress on the MusicXML 3.2 specification issues since the last meeting.

SMuFL 1.4 update

Daniel has not made any specific progress on the SMuFL 1.4 issues since the last meeting.

Next meeting

The co-chairs will next meet on Tuesday 13 August 2019.

Co-Chair Meeting Minutes: July 16, 2019

MNX-Common by Example

Adrian’s colleague Edward has made a responsive design for the MNX-Common by Example page, which looks good. The next steps are to merge this page to the MNX repository, and then to expand with further examples.

Splitting MNX-Common and MNX-Generic

Adrian had not yet created a pull request for splitting MNX-Common and MNX-Generic, but has now done so: pull request #157. We’ll give the community a week to provide feedback on the pull request before we merge it.

DAISY Braille music working group

Haipeng Hu, representing the DAISY Braille music working group, has joined the community group and has raised a couple of new issues in the MNX repository, #155 and #156. The co-chairs have triaged the issues and assigned them to the MNX-Common v1.0 milestone. We welcome any feedback from the community on the requirements expressed in these two issues.

Michael said that he will also consider and respond to Haipeng’s issue #286 in the MusicXML repository in due course.

MNX-Common issues under active review

The co-chairs also discussed the issues under active review in the MNX-Common repository. All of them are currently waiting for Adrian’s proposals in response to community feedback on the issue of layouts, and he proposes to fold those proposals into the MNX-Common by Example page in due course.

Next meeting

The next meeting of the co-chairs will be on Tuesday 30 July 2019.

Co-chair Meeting Minutes: June 18, 2019

Splitting MNX-Common and MNX-Generic specs

Daniel and Michael have reviewed Adrian’s proposal branch for splitting the MNX spec into separate MNX-Common and MNX-Generic specs for issue 98. Adrian will incorporate this feedback and then create a pull request for community review.

MNX-Common by example

Adrian has a designer working on the prototype MNX-Common by example page. Once this design work is done, more examples will be added and the page will move into the MNX repository at GitHub.

Next meeting

Due to vacations, the next co-chair meeting is scheduled for Tuesday, 16 July.

Co-chair Meeting Minutes: June 3, 2019

DAISY Braille Music working group

Following our last meeting, Michael worked through the issues submitted by the DAISY Braille music working group and sent a detailed response to project coordinator Sarah Morley Wilkins, Matthias Leopold, and Haipeng Hu. We are now waiting on a formal response from the DAISY group.

Splitting MNX-Common and MNX-Generic specs

Adrian has created a new branch where the changes to split the MNX-Common and MNX-Generic specifications are now in progress. Michael and Daniel will review the newly-split specifications, and once the co-chairs agree Adrian will proceed with a pull request for community review.

MNX-Common by example

While splitting the MNX-Common and MNX-Generic specs, Adrian realised that no developer could really plough through the spec and clearly understand the differences between the well-understood MusicXML format and the new MNX-Common. He has therefore started working on a page that shows a number of examples encoded in both MusicXML and MNX-Common, to provide simple illustrations of the differences. This is at a very early stage, but Adrian wanted to seek consensus from Michael and Daniel about whether this is a good approach. Michael and Daniel agreed enthusiastically that it is a very valuable one.

Adrian wonders whether we could even use this approach as a means of coming up with concrete syntactic proposals to address issues, so that the process would be to produce a worked example before creating a pull request on the specification. The goal would be to encourage wider participation by lowering the barrier to entry for members of the community to be able to discuss concrete examples. Michael and Daniel agreed that this would be a great approach.

Adrian will look into integrating the examples into the main MNX repository and do some reorganisation of the code to allow easier expansion. Adrian will also add some further examples to the current page, starting with examples that encode the recent decisions on sounding versus written pitch. In order to do this, we’ll need to address issue #111 and specify the way transposition will be encoded. Michael proposes we should follow the pattern suggested by the MusicXML transpose element, but use more attributes rather than separate child elements. Adrian will put together an initial proposal of how this might look.
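For reference, MusicXML's transpose element expresses a B♭ clarinet's transposition with child elements; Michael's suggestion would carry the same data as attributes. The MusicXML form below is real; the attribute-based form is only a hypothetical sketch of what the MNX-Common equivalent might look like (attribute names assumed):

```xml
<!-- MusicXML: written C sounds B-flat, a major second lower -->
<transpose>
  <diatonic>-1</diatonic>
  <chromatic>-2</chromatic>
</transpose>

<!-- Hypothetical attribute-based equivalent for MNX-Common (names assumed) -->
<transpose diatonic="-1" chromatic="-2"/>
```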

Next meeting

The next meeting is scheduled for Tuesday 18 June.

Co-chair Meeting Minutes: May 14, 2019

Clarifying effect of octave lines on written pitch

The co-chairs agreed that pull request #152 is ready to merge in. Consensus is that octave lines do have an effect on the way the pitch is encoded, so if you have a middle C (which would in Scientific pitch notation be C4) with an 8va octave line written above it, it will be encoded as C5, i.e. transposed up by one octave. This also closes issue #4.

Realisations and layouts

It’s still on Adrian’s to-do list to move any remaining relevant comments from issue #138 onto issues #34 and issues #57. This is currently the second item on Adrian’s priority list.

Splitting MNX-Common and MNX-Generic

Adrian has been in touch with Joe and has completed the transition of the back-end process for automatically compiling the Bikeshed file into the specification from travis-ci.org to travis-ci.com, and has added a Contributing document to capture the details of these processes.

Adrian is going to add further detail to the document to clarify the process for dealing with incoming issues, which is in brief summary:

  • Any CG member can raise an issue.
  • Once the co-chairs have discussed it, it will get assigned to a milestone reflecting the current understanding of the issue’s priority.
  • The issues that are currently being discussed with a view to closing them and creating pull requests to add to the spec are tagged with the Active Review tag.
  • In theory, no issue will be worked on unless it is assigned to the V1 milestone or has the Active Review tag.

The co-chairs agreed that we should not split the MNX repository into two separate ones for MNX-Common and MNX-Generic; instead we will have two Bikeshed files, one for each specification. This is the highest-priority item on Adrian’s list.

DAISY Braille music group

Daniel reported that he today attended a meeting of the UKAAF Music Subject Area working group and the issue of the DAISY Braille music group’s collected requests for MusicXML came up. Daniel asked Michael on behalf of the UKAAF group to review the collected list of MusicXML requirements from the DAISY group and indicate to them which are already supported in MusicXML, which are out of scope for MusicXML and should be created as issues for MNX, and which could be added and considered for MusicXML 3.2.

SMuFL status

Daniel reported that he’s been working on tooling for SMuFL and is nearly at the point where he can create a new release of Bravura to match SMuFL 1.3 that addresses some outstanding issues with the metadata file for the font. Once that is done, he plans to start working on the first issues in the SMuFL 1.4 milestone.

MusicXML 3.2 status

Michael is planning to have some discussions with other developers to determine whether some of the issues that are being considered for the milestone are of broader interest than just to MakeMusic.

Michael expects this process to take another few weeks, and once that is complete he will start the process of assigning issues to the active milestone and create the first pull request to start the process.

Next meeting

The next meeting is planned for Monday 3 June.

Co-chair Meeting Minutes: May 1, 2019

GitHub infrastructure

Following some confusion with pull requests on the MNX repository, Adrian has been discussing with Joe how we should handle the branches on the MNX repository. Joe originally set up a Travis CI job on the master branch to automatically run Bikeshed on the files and push the changes to the gh-pages branch automatically. Joe suggested that we maintain his automatic process until Adrian can set up a corresponding set of automated steps for his own GitHub user. The issue is complicated by a new release of Travis CI since the time Joe originally set things up on GitHub. Adrian is in touch with GitHub and Travis CI about this.

The upshot is: all MNX commits, pull requests, etc. should be made on master, and these will all be automatically moved to gh-pages.

Adrian plans to write a CONTRIBUTING Markdown readme to add to the MNX repository. Once this is done, we will add corresponding versions of this file to the other two repositories.

Note that in the MusicXML and SMuFL repositories, commits are currently made on the gh-pages branch because there are no automated travis-ci processes in place on those repositories.

Splitting MNX-Common and MNX-Generic

The reason Adrian has not yet split the MNX-Common and MNX-Generic specifications is that the automated GitHub process for deploying the specification is hardwired to produce a single Bikeshed-based specification. Once the infrastructure changes above are complete we’ll get this split completed. Adrian has updated #98 to this effect.

Written vs. sounding pitch pull request

Adrian merged pull request #148 and the co-chairs discussed the process for reviewing pull requests. We reaffirmed that there is no specific time window for reviewing pull requests, and the co-chairs assume that if there is no dissent from members of the CG within a few days of the pull request being proposed, this will be taken as assent and the pull request will be merged.

Octave lines and sounding pitch

Adrian requested that Michael and Daniel review pull request #152 so that the co-chairs can come to a consensus on how or whether octave lines affect sounding pitch. Adrian has closed issue #4, but Michael suggested that we might consider reopening #4 until pull request #152 is settled.

Participation of co-chairs in issue discussion

One member of the community group contacted the co-chairs privately expressing frustration that proposed pull requests went unreviewed and uncommented by the co-chairs for an extended period of time. The co-chairs agreed that we need to do a better job of keeping on top of the comment threads in issues and review pull requests in a more timely fashion.

Realizations and layouts issue

Michael reminded Adrian that following our meeting at Musikmesse a few weeks ago, the plan is to close issue #138, but not until moving or referencing the comments into other relevant issues. Adrian agreed to undertake this so that issue #138 can be closed.

The next co-chair meeting will be on 14 May 2019.

Co-chair Meeting Minutes: April 16, 2019

MNX-Common

  • Discussion of #4. Adrian will post a comment saying that we are going to encode written pitch. He’ll follow it up with a pull request to change the spec to fill that part in. Initial pull request will clarify that we will encode written pitch rather than sounding pitch. We will also add definitions for written pitch and sounding pitch to the specification.
  • Some discussion about whether “written pitch” includes octave transposition (e.g. from octave lines or clefs with octaves indicated). Michael proposed that we should use the same approach as MusicXML, i.e. to include the totality of octave transpositions. Daniel and Adrian were unsure about this, since on its face it goes against the views expressed in the meeting by developers that they want the pitch encoded to match the displayed page, but Michael thinks we should do what MusicXML does and see whether there are objections.
  • Michael to see whether there is a good definition of written vs. sounding pitch in the MusicXML schemas or tutorial.
  • Discussion of #138. Adrian will post the same comment as in issue #4. We propose to then close this issue after bringing the relevant parts of Joe’s original proposal for realisations and layouts to the other issues, e.g. #34 for differences between full scores and parts, and issue #57 for system/page flow.
  • Following on from the pull request, Adrian will review Christina Noel’s proposal for how pitch might be encoded and consider whether or not this could be the basis of the approach.

MusicXML 3.2

  • We discussed how we should handle additional issues for MusicXML 3.2, including Cyril’s proposal for handling swing, and whether Michael should bring those issues into scope for the next release. We agreed that if Michael is happy to handle the specification duties, he can add them to the milestone without heavyweight co-chair review.
  • Adrian expressed concern that the approach suggested in #283 is possibly not the most semantic approach, and could break display and playback into separate elements. Michael believes these can indeed be combined in one element and address Adrian’s concern.
  • We discussed what to do with issues in the MusicXML repository that we have agreed to handle in MNX-Common. Michael will link them to an existing or new issue in the MNX repository and then close the issues in the MusicXML repository.
  • In the next week or so, Michael plans to put together the first pull requests for MusicXML 3.2, e.g. upping the version number and switching back to the Contributors License Agreement (CLA) for the development period, etc.

The next co-chair meeting will be on 30 April 2019.

Musikmesse 2019 Meeting Minutes

The W3C Music Notation Community Group met in the Apropos room (Hall 3.C) at Messe Frankfurt during the 2019 Musikmesse trade show, on Thursday 4 April 2019 between 2:30 pm and 4:30 pm.

CG co-chairs Michael Good, Adrian Holovaty, and Daniel Spreadbury chaired the meeting, with 29 members of the CG and interested guests attending. The presentations from the meeting are posted at:

W3C MNCG Musikmesse 2019 Presentation

Daniel Ray from MuseScore recorded the meeting and has posted the video on YouTube. The video starting times for each part of the meeting are included in the headings below.

Attendees (3:42)

After Michael gave an introduction to the Music Notation Community Group, we started the meeting by having each of the attendees introduce themselves. Here are the attendees in alphabetical order by organization:

  • Dominique Vandenneucker, Arpege / MakeMusic
  • Dorian Dziwisch, capella-software
  • Dominik Hörnel, capella-software
  • Markus Hübenthal, capella-software
  • Bernd Jungmann, capella-software
  • Christof Schardt, Columbus Soft
  • Matthias Leopold, Deutsche Zentralbücherei für Blinde
  • James Sutton, Dolphin Computing
  • Karsten Gundermann, self
  • Bob Hamblok, self
  • James Ingram, self
  • Simon Barkow-Oesterreicher, Lugert Verlag
  • Mogens Lundholm, self
  • Michael Good, MakeMusic
  • Daniel Ray, MuseScore / Ultimate Guitar
  • Gerhard Müllritter, Musicalion
  • Johannes Kepper, The Music Encoding Initiative
  • Tom Naumann, Musicnotes
  • Christina Noel, Musicnotes
  • Reinhold Hoffmann, Notation Software
  • Martin Marris, Notecraft Europe
  • Heiko Petersen, self
  • Alex Plötz, self
  • Dominik Svoboda, self
  • Martin Beinecke, SoundNotation
  • Adrian Holovaty, Soundslice
  • Frank Heckel, Steinberg
  • Daniel Spreadbury, Steinberg
  • Cyril Coutelier, Tutteo (Flat.io)

capella-software Sponsor Introduction (8:10)

capella-software sponsored this year’s meeting reception. Dominik Hörnel, capella’s CEO, introduced the company and its product line. Most of the company’s products support MusicXML.

capella is following (and sometimes contributing to) the MNX discussions with great interest. Dominik thanked Joe Berkovitz for his pioneering work on MNX and welcomed Adrian for continuing Joe’s work.

Introduction from Adrian Holovaty (12:49)

Adrian offered his own introduction given that this was his first meeting while serving as co-chair. He works on a web site called Soundslice which is focused on music education. Soundslice includes a notation rendering engine which consumes and produces MusicXML. In another life, he was the co-creator of the Python framework Django, from which he has retired as one of the Benevolent Dictators for Life. He is looking forward to continuing Joe’s work on MNX as co-chair of the MNCG.

SMuFL 1.3 and 1.4 (14:25)

Daniel provided a quick update on SMuFL 1.3 and SMuFL 1.4. There are currently 16 issues in scope for SMuFL 1.4, including improvements to font metadata, numbered notation, and chromatic solfège. More issues are welcome.

Alex asked how many glyphs are included in SMuFL so far. Daniel wasn’t sure (he believes around 3,500) but will find out. After the meeting he reported that SMuFL 1.3 has 2,791 recommended glyphs and 505 optional glyphs. The Bravura font currently has 3,523 glyphs, including 227 glyphs that are duplicated at standard Unicode code points.

MusicXML 3.2 (18:49)

Michael presented the current plans for a MusicXML 3.2 release, developed together with the group’s ongoing work on MNX. There are currently about 20 open issues in the MusicXML 3.2 milestone, focused on improved support for parts, improved XML tool support, and documentation clarifications. MusicXML 3.2 does not try to address any of the new use cases for MNX, or feature requests that are better handled in a new format that does not have MusicXML’s compatibility constraints.

Michael opened up discussion about whether the group believes it is a good idea for MusicXML 3.2 development to proceed in parallel with MNX development. Christina expressed concerns about splitting the group’s energies between MusicXML and MNX, but noted that this is also a question of when we expect MNX-Common to be finished. Reinhold followed up saying that it is not just a question of when MNX-Common is finished within the community group, but of when it will be implemented by major vendors for both import and export, and when music will be available in MNX-Common format.

Adrian believes that a huge part of the solution to MusicXML and MNX-Common co-existence and migration is to have automated converters between MusicXML and MNX-Common. Adrian was more optimistic than Michael on the timeline for finishing MNX-Common within the community group.

Daniel Ray believes that MusicXML and MNX will co-exist for some time. Specification, conversion, and adoption are three phases for MNX development. He asked if there are ways to speed up MNX and MusicXML development. For instance, could MusicXML development be broken up into smaller fragments by releasing more frequently? Michael responded that having more contributors and more early implementations can help to speed things along. Given MusicXML’s compatibility requirements, it is important to have multiple working implementations of major new features before a final MusicXML version release.

James Sutton asked about the need for parts support, since parts can already be generated from the data in the score. Michael replied that formatting, for example, is a current customer concern. Manual formatting that is not the same as automatic formatting currently gets lost in transfer between programs.

James Sutton also asked about using differences between score and parts instead of full copies of both score and parts. Christina and Michael replied that this is more the approach that MNX adopts. Specifying differences can get very complex very quickly. MusicXML’s simpler approach can lead to a faster release, while we work out a solution for MNX-Common that is more appropriate for native applications.

James Ingram is in favor of continuing MusicXML and MNX as separate projects, but we need to keep clear the interface and boundaries between the two projects. We do not know if MNX is going to succeed, so we need to keep improving MusicXML in the meantime.

Frank elaborated on an earlier question from James Sutton about importing score and part differences into a notation program if MusicXML 3.2 treats them as separate sets of data. Michael replied that this is already implemented for the next maintenance update for Finale, as well as for a future implementation of SmartMusic. These implementations use XML processing instructions rather than the new features planned for MusicXML 3.2. We will want to test this during MusicXML 3.2 development to make sure that other vendors also find this usable for their products with their different implementations of score and part relationships.

Michael summarized the results of the discussion as having general support for continuing MusicXML development alongside MNX development. However, we should be careful to limit and target the scope of MusicXML releases so we do not slow down MNX development. This sense of the room matched what we heard earlier in the year at the NAMM meeting.

DAISY Consortium and Braille Music (49:50)

Matthias Leopold from the Deutsche Zentralbücherei für Blinde (DZB or German Central Library for the Blind) introduced the work of the DAISY Consortium on Braille music.

Braille music is the international standard notation for blind musicians, developed by a blind Parisian organist. Organizations around the world provide manual braille music translation. This relies on a diminishing pool of expertise, so the goal is to provide fully automatic translation of music scores into braille music. Braille music has some specific challenges because of the optical nature of printed music notation compared to the more semantic nature of braille notation.

Some software is at least partially accessible to blind musicians – for example, capella has supported this for a long time. But there are lots of problems with programs such as Sibelius – and it is necessary to provide better software tools.

The project is an initiative of Arne Kyrkjebø from the Norwegian Library for the Blind. Dr Sarah Morley-Wilkins is coordinating the project from the UK; they are relying on the input from Haipeng Hu, a blind musician from China.

Because Braille music relies on semantic relationships, not just optical relationships, there are issues that need improvement with original source files, conversion tools, and making sure that all concepts can be expressed directly in MusicXML and MNX-Common.

Simon from Forte asked if there was a good way to evaluate the quality of the MusicXML files exported from his tool. Matthias replied that this is a goal of the group (to have a tool to do this) but it is a difficult problem – how can you automatically evaluate what the right results are?

Daniel Ray commented that there is an initiative at MuseScore to improve support for braille music in the OpenScore project.

Martin Marris commented that the change to using a Qt UI in Sibelius 7 prevented access to the UI elements. Matthias says he’s not seen any blind users running Sibelius but his understanding is that it is the most accessible of the main notation programs.

MNX (59:19)

Adrian provided an update on Joe Berkovitz’s whereabouts (he is a successful sculptor and a grandfather) and what Adrian has been doing for the past few months. He has put together an introductory document for MNX-Common to explain what it is intended to be and why we are doing it. This is posted on the MNX GitHub page.

A second thing we have done recently is to decide that MNX-Common and MNX-Generic would be two separate formats. The original idea was that an MNX file could be a more generic SVG-type file or a more semantic MNX-Common file, and that the application opening the file would have to decide what to do about it. We decided against this for three reasons: it would be confusing for end users, confusing for developers, and there is no huge benefit in combining them. So we have decided to split the specification into two, but have not yet done the work.

Bob asked whether we will call the two formats different things. Adrian answered that MNX was always intended to be a codename, but we need to settle on some names soon. Michael said that we would like the names to indicate that they are a family of specifications, since we would like to provide semantic support for other kinds of music notation in the future.

Written and Sounding Pitch Introduction (1:05:25)

We have a fundamental decision to make for MNX-Common: should pitches be represented with written pitch or sounding pitch? These pitches can be different for transposing instruments either in parts or a transposing score.

Adrian outlined the options of storing sounding pitch, storing written pitch, and variations that allow duplicate or alternate pitches. This question of how to represent pitch gets at some big questions about MNX-Common:

  • Are we encoding musical ideas or musical documents?
  • Do we prioritize the performer’s or listener’s perspective?
  • What is the ground truth: the sound or the visual display?
  • How important is XML readability vs. a reference implementation?
  • Is redundancy between score and parts inevitable, or is every part-specific notation derivable from a score given the right hints?

Written and Sounding Pitch Discussion (1:15:02)

James Sutton said that philosophically, data duplication is terrible as it introduces errors due to mismatches, so we should store minimal data. Because music is sound, we should store the sounding pitch, and the transpositions can be handled algorithmically.

Christof advocated for the exact opposite, storing written pitch. We didn’t enjoy programming MusicXML. It suffered from different implementations from different vendors and was too flexible. Developers need to have a comfortable, understandable format. He likes being able to see the direct correspondence between the page of music he is looking at and the XML markup, and not be forced to transpose mentally. Any kind of transposition will often cause changes for stems, slurs, and much more. Make it as joyful as possible for developers to implement this format. The decision should be guided by practical concerns more than philosophical questions.

Adrian asked whether having a reference implementation would help. James Ingram said that a reference implementation would not be a substitute for readable XML markup. Christof said that a reference implementation would not fit into his development workflow.

Johannes disputed that philosophically music is sound. If so, we have never heard music from Bach, Beethoven, or Mozart since all we have from them is paper. Printed or written music is also music. Technically, it’s hard to understand transposing instruments when scanning a printed score using OMR. It would be easier for scanning software to adjust the transpositions than to change all the pitches. He sees no reason to encode the sounding pitch as primary, though both written and sounding pitch could be present – that is a separate discussion. If only one pitch is to be encoded, it should be the written pitch. This is also the way both MEI and MusicXML do it.

Christina said that from a philosophical standpoint the piece of sheet music is the composer’s way of communicating with the musician; sound is the end result of the musical idea. However, as someone who works for a publishing firm, she noted that they work very hard on the details of the displayed music notation. These details need to be correctly specified in the files they import and export. Even if the notation is going to be turned into something else, they need to be able to reproduce the original written document. She feels we should encode both the written pitch and the sounding pitch so that both are available.

Dominik Svoboda thinks both sound and the visual display are ground truth. If parts are messy, you annoy all of the players in your orchestra. The visual display is just as important as the sound. Perhaps we could use AI or neural network technology to bridge the gaps between written and sounding pitch?

Daniel Ray questioned whether there is a consensus on what MNX is for: is it for the composer, for the performer, for the developer? Everything should be in service of the end user; anything that simplifies something for the developer should only be in service of simplifying things for the end user. What excites him is a common native format, because it would maximize everybody’s investment in the format compared to an exchange format. On the subject of what should be in the file format: transposition shouldn’t be in there. Instead, the instrument should be defined, so that software can infer transposition information from its knowledge of the instrument. We may also be too focused on a fixed presentation format, e.g. a publisher’s printed page; instead we should be designing and thinking about a more fluid representation of music notation. What are the unique advantages of digital technology versus making more portable paper?

Christina said it’s important that information about layout must be possible to encode, else publishers won’t buy in. But these kinds of visual things should be as separate as possible from the representation of the semantic information. Musicnotes supports the idea of responsive layouts, but it is difficult to do that and have everything still look nice. If you have to specify every single written and sounding pitch for every note, it becomes very complicated very fast. Written spellings are very important for layout purposes.

Daniel Ray asked what delegations should exist. Currently the publisher and the composer decide everything and the consumer simply consumes what they’re given. In the digital world we can delegate some of those decisions to the user, e.g. reading a B-flat clarinet part in alto clef if they really want to. Adrian said that regardless of what solution we come up with, that will always be possible. Changing instruments, transpositions, etc. on the fly will always be possible, regardless of the decisions made in the choice of pitch encoding.

Bernd strongly supported Christof’s initial point to encode the written pitch. If we were only going to encode sounding pitch, we wouldn’t need to encode anything beyond what MIDI can define. It seems impossible that any automated conversion from MusicXML to MNX will be 100% accurate. Longer term the need will be for dedicated MNX import and export to avoid a lossy intermediate conversion to or from MusicXML. There are many cases where enharmonic decisions are editorial choices, and those choices need to be encoded. The capella format uses written pitch. The people dealing with scores are the players, and what they are talking about – the written pitch – is the important thing.

Reinhold agreed with both Christof and Bernd. We should use written pitch, capturing the editorial decisions while still being able to play the music correctly.

Mogens said that for him music is sound, and resolving the transposed pitch is possible algorithmically if we know the source and destination keys. He likes the idea of encoding both pitches because it makes everything possible, and the redundancy is not so bad. For instance, he likes the idea of being able to specify a different playback pitch without affecting the notation, to capture the specifics of a particular mode or idiom, e.g. microtonal inflections in folk music.

James Ingram thinks that doubly encoding written and sounding pitch is the answer, especially for being able to handle microtones. For sounding pitch we could use an absolute pitch. This could be done by combining a MIDI note number with cents, e.g. 60.25 would be a quarter-tone higher than middle C.
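A fractional MIDI pitch along these lines can be turned into a frequency with standard pitch arithmetic. The sketch below is illustrative only (the function name is invented here): it assumes the fractional part is measured in semitones, so 0.01 is one cent, and assumes A4 = 440 Hz tuning.

```python
def midi_to_hz(note: float) -> float:
    """Convert a (possibly fractional) MIDI note number to frequency in Hz.

    Assumes the fractional part is in semitones (0.01 == one cent)
    and standard A4 = 440 Hz tuning. Illustrative sketch only.
    """
    return 440.0 * 2.0 ** ((note - 69.0) / 12.0)

# A4 (MIDI 69) is 440 Hz; middle C (MIDI 60) is ~261.63 Hz.
print(midi_to_hz(69.0))   # 440.0
print(midi_to_hz(60.0))   # ~261.63
# 60.25 under this convention is 25 cents above middle C:
print(midi_to_hz(60.25))
```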

Based on his experience with making braille translations, Matthias believes that software that needs to do different things needs different formats. Building one format for programs that could do everything gets very complicated. We might want to divide into four or five different formats for different types of applications, and provide transformers between those formats.

Cyril would prefer to have the written pitch encoded. Whatever we choose, we need to be able to transpose accurately. This requires both the chromatic and diatonic information for transpositions. MusicXML does not require both, which causes problems. We should try to fix this for MNX-Common. We also need layout information, including global layout and part layout.
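Cyril’s point about needing both interval components can be illustrated with a small sketch (a hypothetical helper, not MNX or MusicXML syntax): the same chromatic distance of three semitones spells differently depending on the diatonic component, so chromatic information alone cannot distinguish, say, E-flat from D-sharp.

```python
# Semitone offsets of the natural notes C D E F G A B within an octave.
STEP_SEMITONES = [0, 2, 4, 5, 7, 9, 11]
STEP_NAMES = "CDEFGAB"

def transpose(step: int, alter: int, octave: int,
              diatonic: int, chromatic: int):
    """Transpose a spelled pitch by a (diatonic, chromatic) interval.

    `step` indexes C..B (0..6); `alter` is the accidental in semitones.
    Knowing both interval components pins down the new spelling.
    Illustrative sketch only.
    """
    new_step = (step + diatonic) % 7
    new_octave = octave + (step + diatonic) // 7
    old_semis = 12 * octave + STEP_SEMITONES[step] + alter
    new_natural = 12 * new_octave + STEP_SEMITONES[new_step]
    new_alter = old_semis + chromatic - new_natural
    return new_step, new_alter, new_octave

# C4 up a minor third (diatonic 2, chromatic 3) -> Eb4 ...
print(transpose(0, 0, 4, 2, 3))  # (2, -1, 4), i.e. Eb4
# ... but up an augmented second (diatonic 1, chromatic 3) -> D#4.
print(transpose(0, 0, 4, 1, 3))  # (1, 1, 4), i.e. D#4
```

Both results are three semitones above C4; only the diatonic component tells them apart.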

Bob said we are encoding notation, and the performer is using notation to make music for the listener, so we need to focus on the performer rather than the listener. From this perspective it is clear that we must encode the written pitch.

Frank said that the harp is another interesting example. Even though it is not a transposing instrument, it is often written in an enharmonically different key because of the technical way the instrument is played. For instance, B major will often be notated as C flat major for the harp. This type of special case should not be forgotten, however the encoding is done.

Dominique did not want the pitch data to be duplicated, to avoid consistency issues. We need to focus on encoding notation because this format is for notation apps, not MIDI sequencers. Encoding written pitch is better for notation apps because it is one line of code to get to sounding pitch, but the reverse is not true. If we want to use this format as a native format, we need the data to match what is displayed on the screen as much as possible. If we have to transpose the notation on the fly just to display it, that will be difficult to do and will slow down native apps, while transposing for playback is very easy to do.
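The asymmetry Dominique describes can be sketched as follows (illustrative names only, not part of any specification): written to sounding pitch is a single addition once the instrument’s transposition is known, whereas the reverse direction requires an enharmonic spelling decision that pitch arithmetic alone cannot recover.

```python
def sounding_midi(written_midi: int, transposition_semitones: int) -> int:
    """Written -> sounding pitch: one addition, given the instrument's
    transposition interval in semitones. Illustrative sketch only."""
    return written_midi + transposition_semitones

# A Bb clarinet sounds a major second below written pitch:
print(sounding_midi(62, -2))  # written D4 -> sounding C4 (MIDI 60)

# The reverse is NOT one line: a sounding pitch of MIDI 61 could be
# written as C#4 or Db4, an editorial spelling choice that cannot be
# derived from the pitch number alone.
```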

Martin’s only objection to using sounding pitch was that the enharmonic choice of the written pitch is an editorial decision. Editors often make enharmonic changes in parts, and it’s important to be able to encode them.

Simon said that there could be programs that don’t care about sound at all. Having the written pitch as the primary data is easier for those programs to deal with.

Dominik also spoke in favor of using written pitch. We all agree that we have different perspectives on the music and that there are semantic, performance, and presentation descriptions that should all be present in the specification. The semantics are about music notation: not about how music sounds, but how it is written. Therefore we should use written pitch.

As a non-developer, Tom would also opt for the written pitch. We are trying to encode the recipe for the muffins, not the muffins themselves.

Christof said that when Joe started this work he defined a set of roles but left out developers. We should try to make it fun to develop using MNX. End users will benefit from developers joyfully bringing them these features.

Michael closed the meeting. It was great to hear from many more voices that had not been present in our online discussions. For next steps, the co-chairs will now come up with a final proposal to present to the group.