Music Ontology revision 1.10: introduction of musical performances as events in time

I am pleased to publish revision 1.10 of the Music Ontology. As you can see in the change log:

View the change log

many things changed in this revision of the ontology. In fact, all the ambiguities people noticed in the previous revisions should now be fixed. Yves Raimond worked hard to integrate the Time Ontology, along with his Event and Timeline ontologies, into the Music Ontology. The result is a major step forward for the Music Ontology: different levels of expressiveness are now available through it.

In fact, one can describe simple things such as MusicBrainz data and relations, or something as involved as the recording of a gig on a cell phone, published on the web by two different people. Take a look at the updated examples to see what each of the three levels of expressiveness can describe.

As you can see, all the examples are now expressed in both N3 and RDF/XML, and they are classified into three levels of expressiveness. Most people will describe musical things using the first level; however, closed and specialized systems will be able to express everything they want related to music using the Music Ontology (check the level 2 and level 3 examples for a good illustration of this new expressive power).

One of the last things we have to fix is how genres should be handled. Right now we type individuals with their genre. However, considering that genres evolve and change quickly, and that they are strongly influenced by culture, a suggestion has been made to create individuals out of the class mo:Genre and then describe them. We could also create a mo:subGenre property (domain: mo:Genre; range: mo:Genre) that would relate a genre to its sub-genre(s).
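Here is a minimal N3 sketch of that suggestion; the genre URIs are invented for illustration, and mo:subGenre is still only a proposal:

    @prefix mo: <http://purl.org/ontology/mo/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex: <http://example.org/genre/> .

    # Genres as described individuals rather than mere type tags
    ex:Metal a mo:Genre ;
        rdfs:label "Metal" .

    ex:BlackMetal a mo:Genre ;
        rdfs:label "Black metal" .

    # The proposed property relating a genre to one of its sub-genres
    ex:Metal mo:subGenre ex:BlackMetal .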

This idea is really great and would probably be the best way to describe genres, considering their “volatile” meaning over time. However, the question is: how do we link a mo:MusicalWork, mo:MusicalExpression, mo:MusicalManifestation or mo:Sound to its genre? If we create a mo:genre property (domain: mo:MusicalWork, etc.; range: rdf:resource), then people could use that property to link a MusicalWork, etc. to anything that is a resource. Personally, I do not think it is necessarily a good thing to introduce such an unrestricted property into the ontology.
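A restricted alternative would constrain the range to mo:Genre. As a sketch (none of this is in the ontology yet):

    @prefix mo: <http://purl.org/ontology/mo/> .
    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

    # Hypothetical: a genre property whose range is restricted to mo:Genre
    mo:genre a owl:ObjectProperty ;
        rdfs:domain mo:MusicalWork ;   # in practice, a union of the musical classes
        rdfs:range mo:Genre .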

Note that this is not the same thing as the event:hasFactor property, since anything can be a factor of a musical event.

Now that the ontology is becoming pretty stable, the next step is to start using it to describe things related to music. The first step will be to convert MusicBrainz’s data into RDF using the Music Ontology. Soon enough I should make available an RDF dump of this data, along with a Virtuoso PL procedure that will enable people to re-create this RDF dump from a linked instance of a MusicBrainz PostgreSQL database inside Virtuoso.
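To give an idea of what such a conversion could look like, here is a hypothetical N3 fragment; the URIs and the exact property choices are illustrative, not the final mapping:

    @prefix mo: <http://purl.org/ontology/mo/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix dc: <http://purl.org/dc/elements/1.1/> .
    @prefix ex: <http://example.org/musicbrainz/> .

    # A MusicBrainz artist and one of her records, described with MO terms
    ex:artist_bs a mo:MusicArtist ;
        foaf:name "Britney Spears" .

    ex:record_itz a mo:Record ;
        dc:title "In the Zone" ;
        foaf:maker ex:artist_bs .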

Finally, I would like to give special thanks to Yves for his hard work and involvement in the publication of this new revision of the Music Ontology.


6 thoughts on “Music Ontology revision 1.10: introduction of musical performances as events in time”

  1. The MO has advanced a lot since its inception.

    It is interesting to see the MO address issues of time. It’s widely known that OWL has problems with time (which the OWL Time ontology does not solve). To illustrate, let me take one example from the MO documentation:

    [RDF/XML example stripped by the comment form – it described Britney Spears and Kevin Federline; see the follow-up comment below for a link to the original.]

    OWL assumes a static world in which BritneySpears married_to KevinFederline holds eternally. This is not an accurate representation of reality: relations between continuants (3-dimensional entities such as Britney Spears or Britney Spears’ hair) hold for temporal durations.

    BritneySpears stands in a married_to relation to Jason Allen Alexander on 1/3/05, and in a married_to relation to Kevin Federline on 2/27/07. An OWL reasoner would conclude that Britney Spears is polyandrous.

    The solution in this case is to explicitly represent dependent continuants such as Marriages. The two different marriage instances that Britney Spears participates_in could be time-indexed. A similar solution could be applied to other relationships such as being in a band together. This would involve some changes to the MO.
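    In N3, such a time-indexed marriage event might look like this (the Marriage class, the URIs and the exact event/timeline terms are illustrative, not actual MO vocabulary):

      @prefix event: <http://purl.org/NET/c4dm/event.owl#> .
      @prefix tl: <http://purl.org/NET/c4dm/timeline.owl#> .
      @prefix ex: <http://example.org/> .

      # A marriage reified as a time-indexed event rather than a static relation
      ex:marriage2 a ex:Marriage ;
          event:hasAgent ex:BritneySpears , ex:KevinFederline ;
          event:time [ a tl:Interval ] .   # the interval over which the marriage holds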

    This solution is unsatisfactory for certain kinds of relations. For example, Britney Spears’ hair was part_of Britney Spears on 1/3/05, but that same aggregate of hairs is not part_of Britney Spears on 2/27/07. Here, representing what are clearly relations as instances of relationship classes is unsatisfactory, as are other solutions such as turning Britney Spears into a time-worm (the so-called perdurantist ontological position, in which there are no continuants such as Britney Spears, only processes, such as the process of Britney Spears’ life and parts of that life).

    This may not be such a problem for the MO, which does not in general deal with the changing anatomical features of artists; the primary entities in the MO are processes such as performances, where this problem does not arise.

  2. hmm, cutting and pasting RDF/XML doesn’t appear to work here. The example was the 4th one here:
    http://pingthesemanticweb.com/ontology/mo/#sec-example-level1

  3. Sound is both an Event and a MusicalExpression. Surely there are sounds that are not musical expressions?

    The comment for sound is:

    A subclass of MusicalExpression, representing a sound.

    I’m not sure if this is intended to be a text definition. If so, it commits the use/mention mistake. May I suggest that the wording of a definition denote the instances the class represents: instead of “A subclass of MusicalExpression”, why not say “A MusicalExpression” (assuming this definition is correct)?

    As it stands, according to this comment, a sound represents a sound. This is circular.

    I think you want to mention the physics of sounds (and possibly their perception as well – you should take a position on the hoary old philosophical chestnut about sounds in a forest devoid of listeners).

    Merriam-Webster has:

    mechanical radiant energy that is transmitted by longitudinal pressure waves in air or other material medium and is the objective cause of the sensation of hearing

    Which seems reasonable.

    However, this would cause problems for electronic music, in which the “recording” is made from entirely synthesised sounds.
    Can I use the MO to express statements like:

    Theremins sound spooky to me

    Black metal sounds like cats being strangled

    What about found sounds? And this:
    http://www.amazon.com/Stockhausen-Helikopter-Streichquartett-Helicopter-Quartet/dp/B00004NJJX

  4. Hi Chris!

    Yes – you are totally right, we consider the sound as being the “physical sound” itself. I think the cognition of the sound is thus not expressed here, but this could surely be an extension of the ontology! That’s one of the main points now: the ontology provides “anchor points”, so it can act as a backbone ontology for other stuff: automatic feature extraction (often linked to the concept of Event), cognition (linked to this Sound concept), recording stuff (microphones, DACs, linked to Recording)…

    In the case of electronic music, that is an issue. We could maybe consider the sound as an informational object there? To be discussed…

    Btw, are you a member of the mailing list? Its archives are here:
    http://groups.google.com/group/music-ontology-specification-group

    It would be great to chat a bit more about all these points on this list! Could you send your comments there?

    Thanks,
    y

  5. A couple of points:

    Firstly, about time-varying properties like band membership: this issue was foreseen, and it is part of the reason why we adopted an event-based knowledge representation so deeply into the system. In this case, one would declare a ‘Membership’ event which covers the time interval for which the membership holds. A query such as memberOf(‘BritneySpears’, X) would then only have meaning relative to a current time (i.e. it would be a modal query). The Event ontology is indeed an expression of the perdurantist view – the least offensive definition for an event we have found is ‘an event is a classification of a region of space-time (or just time, if space is irrelevant)’.
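    As a rough N3 illustration of such a membership event (the class and the hasAgent/time terms are approximations, not necessarily final MO vocabulary):

      @prefix event: <http://purl.org/NET/c4dm/event.owl#> .
      @prefix tl: <http://purl.org/NET/c4dm/timeline.owl#> .
      @prefix ex: <http://example.org/> .

      # Band membership reified as an event covering a time interval;
      # a memberOf query is then evaluated relative to a reference time
      ex:membership1 a ex:MembershipEvent ;
          event:hasAgent ex:SomeArtist , ex:SomeBand ;
          event:time [ a tl:Interval ] .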

    From this point of view, a Sound (event) is the classification of some region of space-time as containing or supporting an acoustic field – i.e. a *physical* thing. If the sound is recorded as it happens, there is another event which characterises the transduction of the acoustic field into a signal. At this point, we consider the signal to be an informational/mathematical object and we no longer need events to describe it.
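    A minimal N3 sketch of that chain (invented URIs; the hasFactor/hasProduct usage is an approximation):

      @prefix mo: <http://purl.org/ontology/mo/> .
      @prefix event: <http://purl.org/NET/c4dm/event.owl#> .
      @prefix ex: <http://example.org/> .

      # The physical sound, and the recording event that transduces it into a signal
      ex:gigSound a mo:Sound .
      ex:gigRecording a mo:Recording ;
          event:hasFactor ex:gigSound ;      # the acoustic field being recorded
          event:hasProduct ex:gigSignal .    # the resulting informational object
      ex:gigSignal a mo:Signal .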

    In the case of electronic music produced directly as a digital signal, there is no recording and no sound until the signal is played through a speaker. This is also described using two events – the resulting sound, and a ‘Playback’ event which mirrors a recording and represents the realisation of an informational object (the signal) as a physical process (the sound).
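    Continuing the same sketch, the playback case might look like this (the Playback class is hypothetical):

      @prefix mo: <http://purl.org/ontology/mo/> .
      @prefix event: <http://purl.org/NET/c4dm/event.owl#> .
      @prefix ex: <http://example.org/> .

      # Playback mirrors recording: an informational signal realised as a physical sound
      ex:playback1 a ex:Playback ;
          event:hasFactor ex:synthSignal ;   # the synthesised signal
          event:hasProduct ex:speakerSound .
      ex:speakerSound a mo:Sound .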

    There is no obstacle to representing such things in the ontology. It just means that if you ask for recording events, or objects resulting from recording events, you won’t get this electronic music. You would have to ask for, say, signals which are ultimately derived from this or that Work – there would have to be a property relating a Work (conceptual) or a score (informational) directly to the resulting signal. For example, if I use timidity to synthesise a MIDI sequence using certain parameters, there is a functional relation tying the MIDI score to the resulting signal.
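    For instance, with an invented derivation property (not in the ontology), the timidity case could be stated as:

      @prefix mo: <http://purl.org/ontology/mo/> .
      @prefix ex: <http://example.org/> .

      # Hypothetical property tying a MIDI score to the signal synthesised from it
      ex:midiScore a mo:Score .
      ex:synthSignal a mo:Signal ;
          ex:derivedFrom ex:midiScore .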

    If the situation is more complicated – say, a live laptop performance, with a human controlling the synthesis in unpredictable ways while it is going on, and the signal going directly to a file – then there is a (physical) Performance event, but the product is not a Sound but a Signal.

    And yes, Sound should not be a subclass of MusicalExpression.

    Cheers,

  6. Hi Everybody,

    Chris: I think that both Yves and Samer answered all your questions. However, I played a lot with the ontology today to map MusicBrainz to it. I also discussed many things at length with Yves; some (small) issues have been discovered, and we talked a lot about how to make the ontology clearer and simpler to understand.

    We developed a couple of workflows; we will have to rework some definitions and re-introduce some basic properties to make relations explicit. With a data dump of MusicBrainz using the MO and a document explaining the mapping, I think (and hope) that revision 1.10 of the ontology will be much easier to understand.

    Take care,

    Fred
