

Music Notation Community Group

The Music Notation Community Group develops and maintains format and language specifications for notated music used by web, desktop, and mobile applications. The group aims to serve a broad range of users engaging in music-related activities involving notation, and will document these use cases.

The Community Group documents, maintains and updates the MusicXML and SMuFL (Standard Music Font Layout) specifications. The goals are to evolve the specifications to handle a broader set of use cases and technologies, including use of music notation on the web, while maximizing the existing investment in implementations of the current MusicXML and SMuFL specifications.

The group is developing a new specification to embody this broader set of use cases and technologies, under the working title of MNX. The group is proposing the development of an additional new specification to provide a standard, machine-readable source of musical instrument data.


Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.

Final reports / licensing info

  • MusicXML Version 3.1: Licensing commitments
  • SMuFL 1.3: Licensing commitments
  • SMuFL 1.4: Licensing commitments
  • MusicXML 4.0: Licensing commitments


Co-Chair Meeting Minutes: April 16, 2019

MNX-Common

  • Discussion of #4. Adrian will post a comment saying that we are going to encode written pitch. He’ll follow it up with a pull request to change the spec to fill that part in. The initial pull request will clarify that we will encode written pitch rather than sounding pitch. We will also add definitions for written pitch and sounding pitch to the specification.
  • Some discussion about whether “written pitch” includes octave transposition (e.g. from octave lines or clefs with octaves indicated). Michael proposed that we should use the same approach as MusicXML, i.e. to include the totality of octave transpositions. Daniel and Adrian were unsure about this, since on its face it goes against the views expressed in the meeting by developers that they want the pitch encoded to match the displayed page, but Michael thinks we should do what MusicXML does and see whether there are objections.
  • Michael to see whether there is a good definition of written vs. sounding pitch in the MusicXML schemas or tutorial.
  • Discussion of #138. Adrian will post the same comment as in issue #4. We propose to then close this issue after bringing the relevant parts of Joe’s original proposal for realisations and layouts to the other issues, e.g. #34 for differences between full scores and parts, and issue #57 for system/page flow.
  • Following on from the pull request, Adrian will review Christina Noel’s proposal for how pitch might be encoded and consider whether or not this could be the basis of the approach.

MusicXML 3.2

  • We discussed how we should handle additional issues for MusicXML 3.2, including Cyril’s proposal for handling swing, and whether Michael should bring those issues into scope for the next release. We agreed that if Michael is happy to handle the specification duties, he can add them to the milestone without heavyweight co-chair review.
  • Adrian expressed concern that the approach suggested in #283 is possibly not the most semantic approach, and could break display and playback into separate elements. Michael believes these can indeed be combined in one element and address Adrian’s concern.
  • We discussed what to do with issues in the MusicXML repository that we have agreed to handle in MNX-Common. Michael will link them to an existing or new issue in the MNX repository and then close the issues in the MusicXML repository.
  • In the next week or so, Michael plans to put together the first pull requests for MusicXML 3.2, e.g. upping the version number and switching back to the Contributors License Agreement (CLA) for the development period, etc.

The next co-chair meeting will be on 30 April 2019.

Musikmesse 2019 Meeting Minutes

The W3C Music Notation Community Group met in the Apropos room (Hall 3.C) at Messe Frankfurt during the 2019 Musikmesse trade show, on Thursday 4 April 2019 between 2:30 pm and 4:30 pm.

CG co-chairs Michael Good, Adrian Holovaty, and Daniel Spreadbury chaired the meeting, with 29 members of the CG and interested guests attending. The presentations from the meeting are posted at:

W3C MNCG Musikmesse 2019 Presentation

Daniel Ray from MuseScore recorded the meeting and has posted the video on YouTube. The video starting times for each part of the meeting are included in the headings below.

Attendees (3:42)

After Michael gave an introduction to the Music Notation Community Group, we started the meeting by having each of the attendees introduce themselves. Here are the attendees in alphabetical order by organization:

  • Dominique Vandenneucker, Arpege / MakeMusic
  • Dorian Dziwisch, capella-software
  • Dominik Hörnel, capella-software
  • Markus Hübenthal, capella-software
  • Bernd Jungmann, capella-software
  • Christof Schardt, Columbus Soft
  • Matthias Leopold, Deutsche Zentralbücherei für Blinde
  • James Sutton, Dolphin Computing
  • Karsten Gundermann, self
  • Bob Hamblok, self
  • James Ingram, self
  • Simon Barkow-Oesterreicher, Lugert Verlag
  • Mogens Lundholm, self
  • Michael Good, MakeMusic
  • Daniel Ray, MuseScore / Ultimate Guitar
  • Gerhard Müllritter, Musicalion
  • Johannes Kepper, The Music Encoding Initiative
  • Tom Naumann, Musicnotes
  • Christina Noel, Musicnotes
  • Reinhold Hoffmann, Notation Software
  • Martin Marris, Notecraft Europe
  • Heiko Petersen, self
  • Alex Plötz, self
  • Dominik Svoboda, self
  • Martin Beinecke, SoundNotation
  • Adrian Holovaty, Soundslice
  • Frank Heckel, Steinberg
  • Daniel Spreadbury, Steinberg
  • Cyril Coutelier, Tutteo (Flat.io)

capella-software Sponsor Introduction (8:10)

capella-software sponsored this year’s meeting reception. Dominik Hörnel, capella’s CEO, introduced the company and its product line. Most of the company’s products support MusicXML.

capella is following (and sometimes contributing to) the MNX discussions with great interest. Dominik thanked Joe Berkovitz for his pioneering work on MNX and welcomed Adrian for continuing Joe’s work.

Introduction from Adrian Holovaty (12:49)

Adrian offered his own introduction given that this was his first meeting while serving as co-chair. He works on a website called Soundslice, which is focused on music education. Soundslice includes a notation rendering engine which consumes and produces MusicXML. In another life, he was the co-creator of the Python web framework Django, from which he has since retired as one of its Benevolent Dictators for Life. He is looking forward to continuing Joe’s work on MNX as co-chair of the MNCG.

SMuFL 1.3 and 1.4 (14:25)

Daniel provided a quick update on SMuFL 1.3 and SMuFL 1.4. There are currently 16 issues in scope for SMuFL 1.4, including improvements to font metadata, numbered notation, and chromatic solfège. More issues are welcome.

Alex asked how many glyphs are included in SMuFL so far. Daniel wasn’t sure (he believes around 3,500) but will find out. After the meeting he reported that SMuFL 1.3 has 2,791 recommended glyphs and 505 optional glyphs. The Bravura font currently has 3,523 glyphs, including 227 glyphs that are duplicated at standard Unicode code points.
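For anyone who wants to reproduce counts like these, the figures can be derived from the glyphnames.json metadata file included with each SMuFL release. A minimal sketch in Python, assuming the published layout in which each key is a glyph name and each value carries at least a "codepoint" string (the exact field names here are an assumption, not a guarantee):

```python
import json

# Count the named glyphs in a SMuFL glyphnames.json metadata file and look
# one up by name. Assumes each entry looks roughly like
#   "accidentalSharp": {"codepoint": "U+E262", "description": "Sharp"}
with open("glyphnames.json", encoding="utf-8") as f:
    glyphs = json.load(f)

print(f"{len(glyphs)} named glyphs in this metadata file")

entry = glyphs.get("accidentalSharp")
if entry is not None:
    codepoint = int(entry["codepoint"][2:], 16)  # "U+E262" -> 0xE262
    print(f"accidentalSharp sits at {entry['codepoint']} ({chr(codepoint)!r})")
```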

MusicXML 3.2 (18:49)

Michael presented the current plans for a MusicXML 3.2 release, developed in parallel with the group’s ongoing work on MNX. There are currently about 20 open issues in the MusicXML 3.2 milestone, focused on improved support for parts, improved XML tool support, and documentation clarifications. MusicXML 3.2 does not try to address any of the new use cases for MNX, or feature requests that are better handled in a new format that does not have MusicXML’s compatibility constraints.

Michael opened up discussion about whether the group believes it is a good idea for MusicXML 3.2 development to proceed in parallel with MNX development. Christina expressed concerns about splitting the group’s energies between MusicXML and MNX, but said that this is also a question of when we expect to see MNX-Common finished. Reinhold followed up, saying that the question is not just when MNX-Common will be finished within the community group, but when it will be implemented by major vendors for both import and export, and when music will be available in MNX-Common format.

Adrian believes that a huge part of the solution to MusicXML and MNX-Common co-existence and migration is to have automated converters between MusicXML and MNX-Common. Adrian was more optimistic than Michael on the timeline for finishing MNX-Common within the community group.

Daniel Ray believes that MusicXML and MNX will co-exist for some time. Specification, conversion, and adoption are three phases for MNX development. He asked if there are ways to speed up MNX and MusicXML development. For instance, could MusicXML development be broken up into smaller fragments by releasing more frequently? Michael responded that having more contributors and more early implementations can help to speed things along. Given MusicXML’s compatibility requirements, it is important to have multiple working implementations of major new features before a final MusicXML version release.

James Sutton asked about the need for parts support, since parts can already be generated from the data in the score. Michael replied that formatting, for example, is a current customer concern: manual formatting that differs from automatic formatting currently gets lost in transfer between programs.

James Sutton also asked about using differences between score and parts instead of full copies of both score and parts. Christina and Michael replied that this is more the approach that MNX adopts. Specifying differences can get very complex very quickly. MusicXML’s simpler approach can lead to a faster release, while we work out a solution for MNX-Common that is more appropriate for native applications.

James Ingram is in favor of continuing MusicXML and MNX as separate projects, but we need to keep clear the interface and boundaries between the two projects. We do not know if MNX is going to succeed, so we need to keep improving MusicXML in the meantime.

Frank elaborated on an earlier question from James Sutton about importing score and part differences into a notation program if MusicXML 3.2 treats them as separate sets of data. Michael replied that this is already implemented for the next maintenance update for Finale, as well as for a future implementation of SmartMusic. These implementations use XML processing instructions rather than the new features planned for MusicXML 3.2. We will want to test this during MusicXML 3.2 development to make sure that other vendors also find this usable for their products with their different implementations of score and part relationships.

Michael summarized the results of the discussion as having general support for continuing MusicXML development alongside MNX development. However we should be careful to limit and target the scope of MusicXML releases so we do not slow down MNX development. This sense of the room matched what we heard earlier in the year at the NAMM meeting.

DAISY Consortium and Braille Music (49:50)

Matthias Leopold from the Deutsche Zentralbücherei für Blinde (DZB or German Central Library for the Blind) introduced the work of the DAISY Consortium on Braille music.

Braille music is the international standard notation for blind musicians, developed by a blind Parisian organist. Organizations around the world provide manual braille music translation. This relies on a diminishing pool of expertise, so the goal is to provide fully automatic translation of music scores into braille music. Braille music has some specific challenges because of the optical nature of printed music notation compared to the more semantic nature of Braille notation.

Some software is at least partially accessible to blind musicians – for example, capella has supported this for a long time. But there are lots of problems with programs such as Sibelius – and it is necessary to provide better software tools.

The project is an initiative of Arne Kyrkjebø from the Norwegian Library for the Blind. Dr Sarah Morley-Wilkins is coordinating the project from the UK; they are relying on the input from Haipeng Hu, a blind musician from China.

Because Braille music relies on semantic relationships, not just optical relationships, there are issues that need improvement with original source files, conversion tools, and making sure that all concepts can be expressed directly in MusicXML and MNX-Common.

Simon from Forte asked if there was a good way to evaluate the quality of the MusicXML files exported from his tool. Matthias replied that this is a goal of the group (to have a tool to do this) but it is a difficult problem – how can you automatically evaluate what the right results are?

Daniel Ray commented that there is an initiative at MuseScore to improve support for braille music in the OpenScore project.

Martin Marris commented that the change to using a Qt UI in Sibelius 7 prevented access to the UI elements. Matthias says he’s not seen any blind users running Sibelius but his understanding is that it is the most accessible of the main notation programs.

MNX (59:19)

Adrian provided an update on Joe Berkovitz’s whereabouts (he is a successful sculptor and a grandfather) and what Adrian has been doing for the past few months. He has put together an introductory document for MNX-Common to explain what it is intended to be and why we are doing it. This is posted on the MNX GitHub page.

A second thing we have done recently is to decide that MNX-Common and MNX-Generic would be two separate formats. The original idea was that an MNX file could be a more generic SVG-type file or a more semantic MNX-Common file, and that the application opening the file would have to decide what to do about it. We decided against this for three reasons: it would be confusing for end users, confusing for developers, and there is no huge benefit in combining them. So we have decided to split the specification into two, but have not yet done the work.

Bob asked whether we will call the two formats different things. Adrian answered that MNX was always intended to be a codename, but we need to settle on some names soon. Michael said that we would like the names to indicate that they are a family of specifications, since we would like to provide semantic support for other kinds of music notation in the future.

Written and Sounding Pitch Introduction (1:05:25)

We have a fundamental decision to make for MNX-Common: should pitches be represented with written pitch or sounding pitch? These pitches can differ for transposing instruments, whether in parts or in a transposing score.

Adrian outlined the options of storing sounding pitch, storing written pitch, and variations that allow duplicate or alternate pitches. This question of how to represent pitch gets at some big questions about MNX-Common:

  • Are we encoding musical ideas or musical documents?
  • Do we prioritize the performer’s or listener’s perspective?
  • What is the ground truth: the sound or the visual display?
  • How important is XML readability vs. a reference implementation?
  • Is redundancy between score and parts inevitable, or is every part-specific notation derivable from a score given the right hints?

Written and Sounding Pitch Discussion (1:15:02)

James Sutton said that philosophically, data duplication is terrible as it introduces errors due to mismatches, so we should store minimal data. Because music is sound, we should store the sounding pitch, and the transpositions can be handled algorithmically.

Christof advocated for the exact opposite, storing written pitch. We didn’t enjoy programming MusicXML. It suffered from different implementations from different vendors and was too flexible. Developers need to have a comfortable, understandable format. He likes being able to see the direct correspondence between the page of music he is looking at and the XML markup, and not be forced to transpose mentally. Any kind of transposition will often cause changes for stems, slurs, and much more. Make it as joyful as possible for developers to implement this format. The decision should be guided by practical concerns more than philosophical questions.

Adrian asked whether having a reference implementation would help. James Ingram said that the reference implementation would not be a substitute for the readable XML markup. Christof said that a reference implementation would not fit into his development workflow.

Johannes disputed that philosophically music is sound. If so, we have never heard music from Bach, Beethoven, or Mozart since all we have from them is paper. Printed or written music is also music. Technically, it’s hard to understand transposing instruments when scanning a printed score using OMR. It would be easier for scanning software to adjust the transpositions than to change all the pitches. He sees no reason to encode the sounding pitch as primary, though both written and sounding pitch could be present – that is a separate discussion. If only one pitch is to be encoded, it should be the written pitch. This is also the way both MEI and MusicXML do it.

Christina says that from a philosophical standpoint the piece of sheet music is the composer’s way of communicating to the musician. Sound is the end result of the musical idea. However, as someone who works for a publishing firm, she noted that publishers work very hard on the details of the displayed music notation. These details need to be correctly specified in the files we are importing and exporting. Even if we are going to turn it into something else, we need to be able to reproduce the original written document. She feels we should encode both the written pitch and the sounding pitch so that both are available.

Dominik Svoboda thinks both sound and the visual display are ground truth. If parts are messy, you annoy all of the players in your orchestra. Visual display is equally important to the sound. Perhaps we could use AI or neural network technology to bridge the gaps between written and sounding pitch?

Daniel Ray questions whether there is a consensus for what MNX is for: is it for the composer, for the performer, for the developer? Everything should be in service of the end user; anything that simplifies something for the developer should only be in service of simplifying things for the end user. What excites him is a common native format, because it would maximize everybody’s investment in the format compared to an exchange format. On the subject of what should be in the file format: transposition shouldn’t be in there. Instead, the instrument should be defined, so that the software can infer transposition information based on knowledge of the instrument. We may also be too focused on a fixed presentation format, e.g. a publisher’s printed page; instead we should be designing and thinking about a more fluid representation of music notation. What are the unique advantages of digital technology versus making more portable paper?

Christina said it’s important that information about layout must be possible to encode, else publishers won’t buy in. But these kinds of visual things should be as separate as possible from the representation of the semantic information. Musicnotes supports the idea of responsive layouts, but it is difficult to do that and have everything still look nice. If you have to specify every single written and sounding pitch for every note, it becomes very complicated very fast. Written spellings are very important for layout purposes.

Daniel Ray asked what delegations should exist. Currently the publisher and the composer decide everything and the consumer simply consumes what they’re given. In the digital world we can delegate some of those decisions to the user, e.g. read a clarinet in B flat part in alto clef if they really want to. Adrian said that regardless of what solution we come up with, that will always be possible. Changing instruments, transpositions etc. on the fly will always be possible, regardless of the decisions made in the choice of pitch encoding.

Bernd strongly supported Christof’s initial point to encode the written pitch. If we were only going to encode sounding pitch, we wouldn’t need to encode anything beyond what MIDI can define. It seems impossible that any automated conversion from MusicXML to MNX will be 100% accurate. Longer term, the need will be for dedicated MNX import and export to avoid a lossy intermediate conversion to or from MusicXML. There are many cases where enharmonic decisions are editorial choices, and those choices need to be encoded. The capella format uses written pitch. The people dealing with scores are the players, and what they are talking about – the written pitch – is the important thing.

Reinhold agrees with both Christof and Bernd. We should use written pitch, capturing the editorial decisions while still being able to play the music correctly.

Mogens says that for him music is sound, and resolving the transposed pitch is possible algorithmically if we know the source and destination keys. He likes the idea of encoding both pitches because then it makes everything possible, and the redundancy is not so bad. For instance, he likes the idea of making it possible to specify a different playback pitch without affecting the notation to capture the specifics of a particular mode or idiom, e.g. microtonal inflections in folk music.

James Ingram thinks that doubly encoding written and sounding pitch is the answer, especially for being able to handle microtones. For sounding pitch we could use an absolute pitch. This could be done by combining a MIDI note number with cents, e.g. 60.25 would be a quarter-tone higher than middle C.
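As a side note on James’s suggestion, converting such a fractional MIDI note number to a frequency is the standard equal-temperament formula. A minimal sketch, assuming A4 = MIDI 69 = 440 Hz and the fractional part measured in semitones (so a cents offset is added as cents / 100):

```python
def midi_to_hz(note: float, a4_hz: float = 440.0) -> float:
    """Frequency of a (possibly fractional) MIDI note number in equal temperament."""
    return a4_hz * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(60), 2))             # middle C: ~261.63 Hz
print(round(midi_to_hz(60 + 50 / 100), 2))  # 50 cents (a quarter-tone) above: ~269.29 Hz
```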

Based on his experience with making braille translations, Matthias believes that software that needs to do different things needs different formats. Building one format for programs that could do everything gets very complicated. We might want to divide into four or five different formats for different types of applications, and provide transformers between those formats.

Cyril would prefer to have the written pitch encoded. Whatever we choose, we need to be able to transpose accurately. This requires both the chromatic and diatonic information for transpositions. MusicXML does not require both, which causes problems. We should try to fix this for MNX-Common. We also need layout information, including global layout and part layout.
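Cyril’s point can be made concrete with a small illustration (plain Python, not MusicXML or MNX syntax): the chromatic interval alone pins down the sounding pitch but not its spelling, while the diatonic interval fixes the letter name, so both are needed to transpose without guessing at accidentals.

```python
STEPS = ["C", "D", "E", "F", "G", "A", "B"]
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def transpose(step, alter, octave, diatonic, chromatic):
    """Transpose a pitch by a diatonic step count plus a chromatic semitone count.

    The diatonic value chooses the new letter name; the chromatic value then
    determines the accidental needed to reach the intended sounding pitch.
    """
    index = STEPS.index(step) + diatonic
    new_step, new_octave = STEPS[index % 7], octave + index // 7
    target = 12 * (octave + 1) + SEMITONES[step] + alter + chromatic
    new_alter = target - (12 * (new_octave + 1) + SEMITONES[new_step])
    return new_step, new_alter, new_octave

# Three semitones up from C4, spelled as a minor third: E-flat.
print(transpose("C", 0, 4, diatonic=2, chromatic=3))  # ('E', -1, 4)
# The same three semitones spelled as an augmented second: D-sharp.
print(transpose("C", 0, 4, diatonic=1, chromatic=3))  # ('D', 1, 4)
```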

Bob said we are encoding notation, and the performer is using notation to make music for the listener, so we need to focus on the performer rather than the listener. From this perspective it is clear that we must encode the written pitch.

Frank said that the harp is another interesting example. Even though it is not a transposing instrument, it is often written in an enharmonically different key because of the technical way the instrument is played. For instance, B major will often be notated C flat major for the harp. This type of special case should not be forgotten in the encoding, however it is done.

Dominique does not want the pitch data to be duplicated to avoid consistency issues. We need to focus on encoding notation because this format is for notation apps, not MIDI sequencers. Encoding written pitch is better for notation apps because it is one line of code to get to sounding pitch, but the reverse is not true. If we want to use this format as a native format, we need the data to match what is displayed on the screen as much as possible. If we have to transpose the notation on the fly just to display it, that will be difficult to do and slow down the native apps, while transposing for playback is very easy to do.
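A minimal sketch of Dominique’s asymmetry, assuming the format stores written pitch plus the instrument’s chromatic transposition in semitones (for example -2 for a B-flat clarinet, which sounds a major second below written pitch):

```python
def sounding_midi(written_midi: int, chromatic_transposition: int) -> int:
    # Written pitch to sounding pitch really is a single addition.
    return written_midi + chromatic_transposition

# Written D4 (MIDI 62) on a B-flat clarinet sounds C4 (MIDI 60).
print(sounding_midi(62, -2))  # 60

# Going the other way is harder: recovering the written spelling from a
# sounding pitch also needs the diatonic interval, as in the transposition
# sketch a few paragraphs above.
```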

Martin’s only objection to using sounding pitch is that the enharmonic choice of the written pitch is an editorial decision. Editors often make enharmonic changes in parts and it’s important to be able to encode them.

Simon said that there could be programs that don’t care about sound at all. Having the written pitch as the primary data is easier for those programs to deal with.

Dominik also spoke in favor of using written pitch. We all agree that we have different perspectives on the music and that there are semantic, performance, and presentation descriptions that should all be present in the specification. The semantics are about music notation: not about how music sounds, but how it is written. Therefore we should use written pitch.

As a non-developer, Tom would also opt for the written pitch. We are trying to encode the recipe for the muffins, not the muffins themselves.

Christof says that Joe started this work defining a set of roles but missed developers. We should try to make it fun to develop using MNX. End users will benefit from developers joyfully bringing them these features.

Michael closed the meeting. It was great to hear from many more voices that had not been present in our online discussions. For next steps, the co-chairs will now come up with a final proposal to present to the group.

Co-Chair Meeting Minutes: March 26, 2019

In this short meeting, we talked about preparation for our co-chair presentation at Musikmesse next week in Frankfurt.

We committed to putting together our slides by this Friday. Adrian will combine them in a single document for efficient presenting.

We discussed the Musikmesse presentation topics for MusicXML. Michael is planning to cover “themes for the MusicXML 3.2 release,” perhaps including a “What’s not in version 3.2” slide, with things we want to put off until MNX. We also want to make sure there’s buy-in among the Frankfurt meeting attendees that moving forward with MusicXML and MNX simultaneously is the right thing to do (NAMM attendees this past January had confirmed this, but it would be nice to get Musikmesse attendees’ thoughts too).

For the issue of sounding vs. written pitch in MNX, our previous plan was to bring a co-chair recommendation to Frankfurt, but the co-chairs are still disagreeing. 🙂 And we’ve had a spirited GitHub issue discussion, but only a small handful of people have contributed to it. With that in mind, we’ll bring the discussion to Frankfurt, to get more people’s thoughts.

We realized we don’t yet have an MNX GitHub issue about the open question of CSS-style separation of content from layout. Adrian will create an issue for this.

Co-Chair Meeting Minutes: March 19, 2019

MusicXML 3.2

We have now created a V3.2 milestone in the MusicXML repository with an initial set of issues. Michael has gone through all the existing MusicXML issues, closing some answered questions and responding to others. Every open issue now has at least one descriptive tag.

One theme for the MusicXML 3.2 release is improved support for parts as well as scores. Another possible theme would be tooling, including support for XML code generators and XML catalogs.

A common theme among many of the open MusicXML issues is adding more semantics to things like text and lines, where MusicXML is more descriptive. The co-chairs agreed that this is still something we want to address in MNX-Common rather than in a MusicXML update.

Michael will pull the parts and tooling stories into the V3.2 milestone along with several documentation stories. We can then discuss this initial milestone issue proposal at Musikmesse. We expect to be removing and adding issues for a while yet as we plan for this next release.

Adrian proposed the idea of releasing an open source MusicXML cleaner tool for handling some of the common problem areas from older MusicXML files that people try to import. As we discussed this idea, we wondered if perhaps we could provide documentation on how to clean up these common problems.

MNX

We reviewed Adrian’s latest draft of an update to the MNX README.md file, which will provide a better introduction to and motivation for MNX. This will be published soon after some small changes are made based on the review feedback.

We discussed the community feedback on splitting MNX-Generic and MNX-Common into separate formats. There seemed to be wide support for this change. There still would be value in keeping some common branding for these two formats as well as future semantic music formats for other repertoires.

Adrian will be posting a reply to this issue detailing our current plans to separate the two formats into separate specifications. This will be followed by a pull request. We can revisit naming and branding issues at a later date.

We discussed the current status on the issue of written vs. sounding pitch, currently being discussed as part of a broader realizations and layouts issue. Michael had been holding off on providing his views in order to facilitate discussion from the rest of the community. He will post his views in favor of written pitch soon, so that Adrian can proceed with a consolidated pros-and-cons analysis of each approach. We plan to post that analysis on the GitHub issue and on this blog.

Musikmesse

Michael will update the Musikmesse blog post with more agenda details and share with the group. Daniel will present on SMuFL, Michael on MusicXML, and Adrian on MNX. We will need to work out the details of the DAISY Braille Music group presentation.

With Musikmesse coming up in just over two weeks, our next co-chair meeting is scheduled for Tuesday, March 26.

SMuFL 1.3 Published as a W3C Community Group Report

A week ago, on 5 March, the W3C Music Notation Community Group reached another significant milestone, with the publication of the Standard Music Font Layout (SMuFL) specification, version 1.3.

We addressed some 49 issues in SMuFL 1.3, expanding the repertoire of characters in the standard (including significant new ranges for German organ tablature and Kahnotation), and clarifying aspects of the specification.

You can peruse the online version of the SMuFL 1.3 specification, or download the release from GitHub (which includes the various JSON metadata files).

Attention now turns to a potential SMuFL 1.4 release, for which we have created a milestone in GitHub and to which we have assigned the majority of the outstanding issues. If you have any ideas or proposals that could form part of the next SMuFL release, please create an issue so that we can begin discussion.

We will provide a short wrap-up on SMuFL 1.3 and a quick look ahead to SMuFL 1.4 at the forthcoming meeting at Musikmesse. If you are planning to attend, but haven’t yet let us know that you’re coming, please do so now. We look forward to seeing you there!

Co-Chair Meeting Minutes: March 5, 2019

SMuFL 1.3 final report published

Michael and Adrian congratulated Daniel on the publication of the spec, and the co-chairs ceremonially made their licensing commitments. Daniel agreed to write a blog post announcing the release of the report and requesting that others make their own licensing commitments.

Introduction to MNX

Michael and Daniel have provided feedback to Adrian on the draft of the introduction to MNX that he has written.

Adrian was struggling to fit MNX-Generic into the narrative of the post and wonders whether MNX-Generic is now sufficiently different from MNX-Common that it should perhaps be a completely separate format. He has three reasons:

  1. It’s confusing for users to have two kinds of MNX. Which one should users choose? How would the user understand the differences in the level of semantics between the two formats?
  2. It’s confusing for developers, too. The type of development work required to parse something that is essentially an SVG wrapper versus a semantic tree-based document model is almost completely different. Adrian’s guess is that the majority of developers would only want to do one of them.
  3. There’s no obvious benefit in having them wrapped together in the same format apart from some negligible stuff like sharing the metadata format.

Michael expressed agreement with all of these points. Part of Joe’s motivation for the decision to bring MNX-Generic into the standard was to attract people from other communities, e.g. from the MEI community. But having a family of specifications maintained by the same organization may be sufficient to address that goal.

This is already captured as issue #98, which the co-chairs agreed to place under active review.

Michael also pointed out that the MNX names are still considered code names, so we could even decide to call them different things at the end. Daniel expressed agreement with Adrian’s points.

Michael said that rather than the naming issue, the more critical issue is the notion of whether MNX is a generic container format. In many ways an MNX container would be no more useful to a software application than e.g. a zip file. It would not be obvious without looking inside it what it contains and what to do with its contents. The co-chairs agreed to defer further discussion on the container format until the wider issue is settled.

Adrian clarified that he thinks it’s crucial to do the job of MNX-Generic but the issue is how the two formats are positioned.

The co-chairs are in broad agreement that we should focus MNX on the semantic format currently known as MNX-Common and move the MNX-Generic format to a separate specification. We will invite feedback from the user community about whether we are overlooking any advantages to the current approach, for feedback about the idea of separating them, and to solicit ideas for names for a newly-separated format.

Michael will slightly reorganise the order of the pages on the Wiki to make sure the spec is prominently displayed there.

Adrian’s introduction will be published as the README.md in the MNX repository, so it will eventually appear as the root of https://w3c.github.io/mnx.

CSS in MNX

Adrian raised the issue of using CSS-like styling in MNX, which is currently part of the draft specification. Although he thinks it’s conceptually a beautiful idea, it will probably be impractical to embed CSS syntax directly in the format because it would require building a whole parallel parser. Michael agreed, and there was brief discussion about how we should take the aspects of CSS that work for us and build an XML-friendly approach that fits into the existing XML parser. We know that this was already something of a controversial approach among the community for this reason.
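The parser concern can be illustrated with a toy comparison (hypothetical markup, not actual MNX syntax): an embedded CSS-like style string forces every consumer to write a second, string-level parser, whereas plain XML attributes are handled by whatever XML parser the application already uses.

```python
import xml.etree.ElementTree as ET

css_style = '<note style="color: red; size: 7"/>'   # hypothetical markup
xml_style = '<note color="red" size="7"/>'          # hypothetical markup

# The CSS-like form needs an extra, hand-written declaration parser.
def parse_declarations(text):
    pairs = (item.split(":", 1) for item in text.split(";") if item.strip())
    return {key.strip(): value.strip() for key, value in pairs}

print(parse_declarations(ET.fromstring(css_style).get("style")))
# {'color': 'red', 'size': '7'}

# The attribute form is consumed entirely by the existing XML parser.
print(dict(ET.fromstring(xml_style).attrib))
# {'color': 'red', 'size': '7'}
```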

The co-chairs reflected that all of the decisions that have been taken to date on MNX are still fungible, provided we have consensus from the community.

Sounding vs. written pitch

Adrian will review the recent discussion on issue #138 concerning sounding vs. written pitch and bring it together into a set of pros and cons. The aim is still for the co-chairs to have sufficient time to come up with a concrete proposal that takes the discussion into account ahead of the Musikmesse meeting. We aim to formalise this proposal in our next meeting in two weeks.

DAISY Music Braille

We haven’t yet received the promised documents from the DAISY Music Braille group, which were expected on 1 March. Daniel reported that Matthias Leopold from DZB in Germany plans to attend the Musikmesse meeting.

Preliminary Musikmesse meeting agenda

Michael will send an email reminding people to sign up to let us know they’re coming and to solicit agenda items. The current preliminary agenda looks like this:

  • SMuFL 1.4 plans (Daniel will assemble an issue list etc.)
  • MusicXML 3.2 plans (Michael will assemble an issue list etc.)
  • DAISY Braille Music group and its requirements (Daniel will ask Matthias if he wants to speak)
  • MNX topics (probably MNX-Generic vs. MNX-Common, CSS in MNX, sounding/written pitch)

Michael will contact Peter Jonas at MuseScore to see if they will be able to provide an audio and/or video recording of the meeting.

CLA enforcers for GitHub

Adrian has done some research into CLA enforcers, and identified a number of possibilities to ensure that community members sign up to the CLA before they make contributions (definitely for pull requests, and possibly for raising issues) so we can be sure of compliance. Adrian will send the candidates to the other co-chairs for discussion, and then we’ll run our proposal past our contacts at the W3C.

Next meeting

Our next meeting is scheduled for Tuesday 19 March.

Co-Chair Meeting Minutes: February 19, 2019

Minutes from the Feb. 19, 2019, co-chair meeting:

Our Community Group’s in-person meeting at the Musikmesse in Frankfurt is set for Thursday, April 4. We discussed travel logistics. All three co-chairs will be in attendance.

The SMuFL 1.3 Community Group Final Report is essentially done. Several community members have reviewed it, and we’re not planning any more changes. The remaining step is coordinating the publication with the W3C, to make it official.

Moving forward with SMuFL, Daniel has reviewed all the open issues in the GitHub issue tracker and moved outstanding ones to a new milestone for Version 1.4. He also closed a few others that he determined were out of scope.

In MusicXML news, Michael plans to create a Version 3.2 milestone in the GitHub issue tracker and do similar issue gardening. Michael also plans to make a proof of concept for a new idea regarding richer support for parts within a score; when he has it, he’ll create a proposal on GitHub.

The DAISY Music Braille Project has recently been in touch with us and wants to engage more deeply with our community. At the basic level, they’d simply like our community to be aware of them as a resource; on a deeper level, they have specific requests for new MusicXML/MNX features. Daniel has an upcoming meeting with a representative, and they’ll discuss the best way to triage these features. We’d also like to see whether any DAISY folks can attend our Musikmesse meeting.

We’ve recently received several ideas and bug reports in our GitHub issue tracker from folks who haven’t yet formally joined the community group — and, hence, haven’t transferred their IP rights to the group (i.e., signed the CLA). As such, to be safe, we can’t respond to these issues until they take that step. Aside from continuing to contact these people one-by-one, there might be a technical solution: requiring GitHub contributors to sign a CLA in order to post issues or pull requests. Adrian is going to investigate this.

In MNX news, there’s been some activity recently in the GitHub thread regarding pitch representation (MNX issue 138). Adrian is continuing to work on an MNX “overview for laypersons” document and plans to have a draft this week.

Adrian brought up a few MusicXML oddities he’s seen in the wild recently, and Michael gave historical context and thoughts on the best way to handle them. One of the out-of-the-ordinary notations Adrian came across — a C clef centered on the C staff line — got the co-chairs talking about MNX’s definition of clefs (which is not yet designed) and how it should perhaps let people specify a staff position (i.e., including staff spaces) instead of only staff lines. Adrian will file this in the MNX issue tracker.

Our next meeting is scheduled for Tuesday, March 5.

Musikmesse Meeting on 4 April 2019

[Edited on 19 March with updated agenda]

We are pleased to announce that we will have a face-to-face meeting of the W3C Music Notation Community Group at the Musikmesse in Frankfurt. We look forward to this event each year as we usually have 30 music notation experts participating in the discussions.

This year’s meeting will be Thursday, 4 April 2019 from 2:30 pm to 5:30 pm in the Apropos meeting room in Hall 3, Level C. This is the date that worked best from our Community Group poll, and fits in with the new Musikmesse schedule that runs Tuesday through Friday.

As in past years, we will have a 2-hour meeting followed by a 1-hour reception. This year’s reception will be sponsored by capella-software.

Our current meeting agenda topics are:

  • SMuFL 1.4 plans and current issue list
  • MusicXML 3.2 plans and current issue list
  • DAISY Braille Music group and its requirements
  • MNX-Generic vs. MNX-Common
  • CSS in MNX
  • Sounding and written pitch in MNX

You will need a Musikmesse trade visitor ticket to attend the meeting. These cost 30 euros and are available online at www.musikmesse.com.

Please sign up on our Google form at https://goo.gl/forms/mIz5rFqyguRX43ku2 if you plan to attend the meeting. This will help ensure that we have enough room and refreshments for everyone.

We look forward to seeing you in Frankfurt!

Best regards,

Michael Good, Adrian Holovaty, and Daniel Spreadbury
W3C Music Notation Community Group co-chairs

Co-Chair Meeting Minutes: February 5, 2019

One of the suggestions made at the Music Notation Community Group NAMM meeting was to publish the minutes of the co-chair meetings, which generally happen every 2 weeks. Here is the first of these minutes.

We discussed the status of the SMuFL 1.3 Community Group Final Report. Two small edits have been made to fix typos in the published draft. A few other people have reviewed this draft without requesting changes. We plan to send out a reminder next week about reviewing the report.

If all looks good, we plan to publish the Final Report the week of February 18, 2019. Daniel will contact W3C staff to see if there are steps we need to take on GitHub before publishing the final report. We need to better understand exactly what gets copied to the W3C site when the report is published. The SMuFL report is more complex in this respect than the MusicXML report we published earlier.

We discussed the document that Adrian is writing about positioning of MNX and MusicXML. This will likely be initially published on the group Wiki, with a goal of evolving it into a preamble for the MNX specification. The MNX preamble could work like the SMuFL report preamble, detailing the motivation for the specification before diving into the details. This document will then be followed by Adrian’s blog post asking for guidance on pitch representation issues from a larger developer community beyond the Community Group.

Michael has not yet heard back from Musikmesse about a room for a Community Group meeting. He plans to follow up with Musikmesse later this week.

The next co-chair meeting is scheduled for Tuesday, February 19.

NAMM 2019 Meeting Minutes

The W3C Music Notation Community Group met at Room 201C in the Anaheim Convention Center during the 2019 NAMM trade show, on Friday, January 25, 2019 between 3:00 pm and 4:00 pm.

The meeting was chaired by CG co-chairs Daniel Spreadbury and Michael Good, and was attended by 25 members of the CG and interested guests. The slides from the meeting can be found at:

W3C MNCG NAMM 2019 Meeting Slides

Philip Rothman from the Scoring Notes blog video recorded the meeting and has posted it on YouTube. Dominik Svoboda audio recorded the meeting and his audio is included in the video. The video starting times for each part of the meeting are included in the headings below.

Introduction to the W3C MNCG (Starts at 0:02)

Daniel Spreadbury introduced the W3C and the Music Notation Community Group. Daniel reiterated the importance of joining the group in order to make any contributions to the work of the group on SMuFL, MusicXML, and MNX. The web user interface for joining the group is not very easy to use, so please reach out to one of the co-chairs if you have difficulties.

Daniel discussed the changes in the group over the past year, in particular the change of co-chair from Joe Berkovitz to Adrian Holovaty. Given Joe’s absence, MNX was largely static over the last 6 months of 2018, but we look forward to resuming progress with Adrian’s leadership on MNX. Daniel will continue to lead work on SMuFL and Michael will continue to lead work on MusicXML. Adrian was not able to attend this year’s NAMM but does plan to attend our Musikmesse meeting.

SMuFL (Starts at 7:39)

Daniel Spreadbury led a discussion of the current status and future plans for SMuFL. Our immediate goal is to release a W3C Community Group Final Report for SMuFL 1.3. There was a previous draft 1.2 version of SMuFL, but this did not make it to an official Community Report.

At the time of the meeting the draft of the final report was close to publication but not yet released. (It was published at https://w3c.github.io/smufl/gitbook/ a few days after the meeting.) The main changes include new ranges for Kahnotation and German organ tablature, along with extensions to a few existing ranges and more stylistic alternates.

Currently there are 7 fonts available that support SMuFL: Bravura, Petaluma, Gootville, Leipzig, November 2, and MTF-Cadence. Bravura is intended as the reference font for SMuFL and contains all the glyphs that are part of the SMuFL 1.3 report. The other fonts cover the common music notation ranges but not the complete list of SMuFL ranges.

After the final Community Group Report is released for SMuFL 1.3, SMuFL will continue to evolve. There are around 30 issues currently outstanding that were not addressed in 1.3, and a 1.4 release is possible later in 2019.

We look forward to reviews of the draft report. The SMuFL GitHub repository is the preferred location for reporting issues. We expect to have at least 2 weeks for review after the draft report is published.

Hans Landval Jakobsen asked if there are any font technology requirements such as TrueType for SMuFL fonts. Daniel replied that there are no font technology requirements aside from being able to support code points in the Private Use Area of the Unicode Basic Multilingual Plane. All SMuFL fonts released so far have been OpenType fonts, including OpenType fonts with PostScript outlines.
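For reference, the Private Use Area of the Basic Multilingual Plane spans U+E000 through U+F8FF, which is where SMuFL places its glyphs (the treble clef glyph gClef, for instance, sits at U+E050). A trivial check:

```python
def in_bmp_private_use_area(codepoint: int) -> bool:
    # The Basic Multilingual Plane's Private Use Area is U+E000-U+F8FF.
    return 0xE000 <= codepoint <= 0xF8FF

print(in_bmp_private_use_area(0xE050))  # True: SMuFL's gClef
print(in_bmp_private_use_area(0x0041))  # False: the letter "A"
```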

MusicXML (Starts at 22:06)

MusicXML 3.1 was released in December 2017 so no changes have been made in MusicXML since then. Companies have been busy implementing MusicXML 3.1 features over the past year.

Michael proposed starting work on a new version of MusicXML in parallel with ongoing MNX development work. This is a change from the previous plan where we wanted to wrap up MusicXML 3.1 development to focus on MNX. However, with the changes in the Community Group co-chairs and evolving needs for MusicXML exchange, it seems to make sense now to work on both in parallel.

Michael discussed potential issues that could drive a MusicXML 3.2 release. The exchange of information about the differences between score and parts has become a key pain point for MakeMusic’s work in exchanging between its Finale and SmartMusic products. This is something MakeMusic would like to see resolved in a standard fashion, rather than a MakeMusic-specific workaround. A related issue is MusicXML 3.1’s incomplete support for concert scores that are associated with transposed parts. Currently there are 92 open MusicXML issues on GitHub, so there is plenty of scope for choosing what to work on for a 3.2 release.

Doug LeBow asked if any of the major music preparation houses has seen the list of potential MusicXML issues. Michael expects that nobody has looked at this list in a long time, but now would be a good time for people to join the Community Group to help drive the selection of MusicXML 3.2 issues. The list of issues could also be cleaned up and tentatively organized into a MusicXML 3.2 milestone.

Jason Freeman asked if there was a reference implementation for MusicXML, or could there be? The Community Group had earlier decided to postpone the issue of a reference implementation in order to work on MNX. For MusicXML, Finale is probably the most complete implementation, but it is not a reference implementation: it is neither fully complete nor open source.

Jeff Kellem suggested that building up a standard test suite over time could be an important contribution for developers. Michael agreed and thought this was more feasible than a reference implementation – we could start small and iterate over time, independent of MusicXML releases.

MNX (Starts at 39:35)

We have an initial draft MNX specification available at GitHub. The co-chairs believe it provides a solid technical foundation, but there is still a lot of work ahead to get to a complete draft.

Our plans for 2019 are to ramp back up on MNX development. We plan to start by finalizing a set of interrelated issues regarding pitch representation, including using written pitch vs sounding pitch and encoding the relationship between score and parts. We would like to have a discussion of this at our Musikmesse meeting and get these issues resolved shortly afterwards.

Doug said that MusicXML and MNX each have their own strengths, so developing them in parallel seems like a good idea. But he is unclear on what exactly MNX can add to what MusicXML already does. Michael responded that this is a common source of confusion, and Adrian is planning to write a document to help address what we are trying to accomplish with MNX.

Jeremy Sawruk suggested that if we are going to do MusicXML and MNX development in parallel, perhaps we could develop the test suites in parallel as well. That might clarify some of the differences between the two formats.

In general, the attendees at this meeting appeared to agree that doing MNX and MusicXML development in parallel was a good idea.

Next Steps (Starts at 48:15)

We discussed next steps for the Community Group, including our plans for meeting at Musikmesse, which will be held in Frankfurt from April 2 to 5.

Fabrizio Ferrari asked about ways to communicate more often aside from the two face-to-face meetings and the nitty-gritty of the individual GitHub issues. Perhaps the co-chairs could send out a periodic digest of the issues we are discussing? Chris Koszuta suggested sending this digest out on a weekend to avoid conflicts with an overload of other business emails. Jason suggested sending out summaries of the co-chair meetings.

Chris also suggested simulcasting the face-to-face meetings for people who could not travel to the USA or Europe, as the discussions tend to be different with the different attendees. This can be difficult due to lack of budget and the Internet service not always being reliable at shows. We are thankful for the work of Philip Rothman and Peter Jonas in making video recordings of these meetings which get posted afterwards, but these do not allow for real-time interactive discussion.

Doug suggested that we have interim video face-to-face meetings using Zoom, which he would be willing to host.

Michael suggested possibly using a Slack channel or channels for ongoing chat discussion. There seemed to be many Slack users attending this meeting, so this seems worth investigating.

Daniel reminded the group that we have greatly reduced notifications from our GitHub repositories. So if you unsubscribed or filtered these notifications in the past because there were too many of them, please reconsider, as the notifications now go out only at key points like pull requests.

The co-chairs will be discussing these suggestions and plan to make proposals for more effective communication within the Community Group.

Later that evening, many of the attendees met for our 3rd annual dinner at Thai Nakorn restaurant in Garden Grove.

Attendees

  • Franck Duhamel, Arobas Music
  • David Gros, Arobas Music
  • Hans Landval Jakobsen, EarMaster
  • Jason Freeman, Georgia Institute of Technology
  • Asa Doyle, Hal Leonard
  • Chris Koszuta, Hal Leonard
  • Jeremy Sawruk, J.W. Pepper
  • Doug LeBow, self
  • Johannes Biglmaier, M3C
  • Michael Good, MakeMusic
  • Dominique Vandenneucker, MakeMusic
  • Duncan Hearn, Musicnotes
  • Jon Higgins, Musicnotes
  • Steve Morell, NiceChart
  • John Mlynczak, Noteflight
  • Philip Rothman, NYC Music Services / Scoring Notes
  • Eric Carraway, percuss.io
  • Jennifer Amaya, Riverside City College
  • Jeff Kellem, Slanted Hall
  • Daniel Spreadbury, Steinberg
  • Dominik Svoboda, self
  • Pierre Rannou, Tutteo (Flat)
  • Jan Vasina, self
  • Fabrizio Ferrari, Virtual Sheet Music
  • Kevin Weed, self