Recording of Doctoral Defense

I am very happy to announce that I’ve successfully completed my doctoral studies and defended my thesis on the 4th of July 2019. I’m now officially Dr. techn. Alexander Pacha.

For my defense, I prepared a 45-minute presentation. Unfortunately, not everyone who might be interested in it was able to attend the event in person, so I decided to re-record the presentation and upload the video to YouTube. I hope you enjoy watching it, as it summarizes all of the work that I’ve been doing in the last two-and-a-half years in less than an hour:

The icon used as the featured image was made by Icon Pond from www.flaticon.com and is licensed under the CC BY 3.0 license.

From an infant to a toddler – how my computer learnt to detect music symbols

In my last post on optical music recognition, my computer was an infant. It was merely able to utter words when presented with an isolated image of a single object. It could distinguish a quarter rest from a G-clef, similar to a baby that can distinguish an apple from a banana. Nothing more. But it learnt all of that by itself (!), given only a couple of thousand annotated examples and a powerful PC.

In this post, I am proud to announce that my machine has grown up and is now a toddler. It still can’t run or talk. But it can crawl around and find all the apples and bananas in the room. It is now capable of detecting music symbols in music scores like this one:

[Figure: crop_partially_detected.png – a music score crop with partial symbol detections]

It’s hard to see anything, right? Well, that’s because there are a lot of things going on in music scores. But let’s take it step by step.

If you haven’t read my previous post or been following my research, here is the TL;DR: I am teaching my computer to read music scores, enabling cool things such as listening to a score after taking a picture of it with your smartphone. While the idea itself is not new at all, I am following a radically new approach: instead of telling (programming) the computer what to do (first look for lines, then for round objects, then combine them, …), I am letting the computer figure that out by itself. The whole thing is a five-step process, described here, of which I am currently working on step number three: detecting music objects.

We are using deep learning techniques, where a big (convolutional) neural network (think of it as something that tries to imitate how the brain works) is shown thousands of images like the following one, along with the information about what is in them. Watch what happens over time:

[Animation: individualImage3-animated – detections on a stave improving as training progresses]

While it starts out detecting only a limited number of simple symbols, it quickly figures out how to detect all sorts of symbols. And while it is clearly not perfect, e.g. the half rest is confused with a whole rest in the above stave, it is remarkable what the network was capable of learning by itself. If you are interested in the technical details, they are described in a scientific paper that I will present at this year’s International Workshop on Document Analysis Systems in Vienna. But to summarize it: the computer scans the image along a grid with predefined “boxes” and evaluates each box on whether it contains something of interest or not, yielding a so-called “objectness score”. Once it has found all (or most) potential boxes, it starts to refine them and assigns a label to each box, e.g. “I think that is a G-clef in the upper left corner at position X, Y”. This process is end-to-end trainable with the so-called Faster R-CNN approach and was implemented with TensorFlow’s Object Detection API.
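To make the grid of predefined boxes a little more tangible, here is a toy sketch of how such candidate boxes could be laid out across an image before each one is scored for objectness. This is my own illustration, not the actual Faster R-CNN implementation, and all sizes and strides are made up for the example:

# A toy sketch of laying out predefined "boxes" (anchors) on a regular
# grid across the image; the network then scores each box for objectness
# and refines the promising ones. All numbers here are illustrative.
image_height, image_width = 256, 1024
stride = 16                  # grid spacing in pixels
box_sizes = [16, 32, 64]     # predefined box sizes at every grid point

anchors = []
for y in range(0, image_height, stride):
    for x in range(0, image_width, stride):
        for size in box_sizes:
            # Box centered on the grid point: (x_min, y_min, x_max, y_max)
            anchors.append((x - size // 2, y - size // 2,
                            x + size // 2, y + size // 2))

print(f"{len(anchors)} candidate boxes to score for objectness")

Even for this small example, thousands of boxes have to be evaluated, which is why the objectness score is used to quickly discard the empty ones before the expensive classification step.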

Did you notice that the scores here were handwritten? That’s right. We are working on the CVC-MUSCIMA dataset that contains 1000 handwritten music scores, which were annotated in the MUSCIMA++ dataset by my outstanding colleague Jan Hajič, whose effort I am very grateful for.

And while this looks nice already, it has one significant drawback: the images you see above are not the entire score. What happens to a symbol that is bigger than the image section? Well, it is not detected at the moment. But I wouldn’t mention it here if that were the end of the line. In fact, I’ve already been working on detecting symbols in the full image, and the first results are very promising:

[Figure: individualImage-78628.png – symbol detections on a full score page]

It still has a couple of other issues, for example some very small objects that are not properly detected, or some confusions, but it is already a big step towards making this work in such a way that the computer can improve over time, simply by being provided with more data.

I will continue to work on the object detection a little longer before I start teaching the computer what those symbols actually mean. Once I have taught it how to find the symbols, how they relate to each other and what they mean, I hope it will grow up and sing beautiful songs to me. Until then, I will have to sing them myself. 🙂

Update 27.04.2018: If you are interested, here is the presentation on this subject that I gave at DAS 2018.

Aligning images – an engineer’s solution

Recently I was struggling with the fact that one of the datasets I was working with had the same images in different versions, but they were not correctly aligned. Since one of them carried the location annotations that I used for training a Music Object Detector, I had to align them somehow.

To get an impression of the alignment error, look at the following images:

The top-left image is the binarized image, which serves as a reference. The top-right image is the original gray-scale image, which is misaligned ever so slightly – you can’t even notice it by just looking at them. So I generated the bit-wise difference between the two images, shown at the bottom, where you can almost read the full score, because the two images are slightly shifted against each other.

Generating such a diff-image from two images in Python with the Pillow library basically boils down to:

from PIL import Image, ImageChops

# Difference, inverted so that differing pixels appear black on white
difference = ImageChops.difference(Image.open(image1_path), Image.open(image2_path))
diff_image = ImageChops.invert(difference)
diff_image.save(output_path)

allowing me to visually verify whether or not the images were aligned correctly.

It turns out that almost every image in the dataset was transformed a little bit. Since the dataset contains 1000 images in multiple flavors, I needed some automation. As you can see, the images are not very far apart from each other. So upon searching for a clever solution, I found a nice blog entry that attempts to align the color channels of slightly misaligned images by applying an iterative algorithm to find an affine transformation (which is generally a very hard task). Luckily, that algorithm is readily implemented in OpenCV as cv2.findTransformECC. Using it is almost newbie-friendly:

import cv2
import numpy as np

# findTransformECC works on single-channel images, so load them as grayscale
im1 = cv2.imread(path_to_desired_image, cv2.IMREAD_GRAYSCALE)
im2 = cv2.imread(path_to_image_to_warp, cv2.IMREAD_GRAYSCALE)

# Look for an affine transformation, starting from the identity
warp_mode = cv2.MOTION_AFFINE
warp_matrix = np.eye(2, 3, dtype=np.float32)

# Specify the maximum number of iterations
number_of_iterations = 100

# Specify the threshold of the increment in the correlation
# coefficient between two iterations
termination_eps = 1e-7

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
            number_of_iterations, termination_eps)

# Run the ECC algorithm. The results are stored in warp_matrix.
(cc, warp_matrix) = cv2.findTransformECC(im1, im2, warp_matrix,
                                         warp_mode, criteria)

Lastly, one “only” needs to warp the image with the found affine transformation:

# Get the target size from the desired image
target_shape = im1.shape

# Warp im2 onto im1. WARP_INVERSE_MAP is needed because warp_matrix
# maps from the reference image to the misaligned one.
aligned_image = cv2.warpAffine(
                          im2,
                          warp_matrix,
                          (target_shape[1], target_shape[0]),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP,
                          borderMode=cv2.BORDER_CONSTANT,
                          borderValue=0)

cv2.imwrite(destination_path, aligned_image)

The final result is remarkable. Can you still see the difference?

[Figure: diff of the aligned images. A black pixel appears where the images are not the same; since the images are now aligned, the diff is almost completely white.]

Just a few pixels remain, and these are due to errors during binarization of the image, which is necessarily a lossy operation. A cool side-effect is that the images are now not only aligned but also have the same size.
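If you want more than a visual check, the remaining difference can also be quantified. Here is a small sketch that counts the differing pixels with OpenCV; the path variables are placeholders:

import cv2

# Load both images as grayscale; after alignment they have the same size
reference = cv2.imread(reference_image_path, cv2.IMREAD_GRAYSCALE)
aligned = cv2.imread(aligned_image_path, cv2.IMREAD_GRAYSCALE)

# Count the pixels that still differ after alignment
difference = cv2.absdiff(reference, aligned)
print(f"{cv2.countNonZero(difference)} differing pixels remain")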

The only things I needed to tweak a little were the two parameters number_of_iterations and termination_eps. Both are required for the cv2.findTransformECC algorithm and specify how long it tries to find a solution and the required quality before stopping. When either criterion is satisfied, the algorithm stops and returns the solution found so far. Letting the algorithm run for a few hours yielded a perfectly aligned dataset, which now allows me to go back to training my networks to detect musical objects.

If you are interested in the full source code, you can find it in this GitHub repository.

The score images depicted in this article are from the CVC-MUSCIMA dataset by Alicia Fornés, Anjan Dutta, Albert Gordo, and Josep Lladós, licensed under CC BY-NC-SA 4.0. More information on the dataset can also be found here as well as in their original paper.


A fresh start in Optical Music Recognition

It’s been long – too long, to be honest – since I’ve posted an update about the SightPlayer project, and I deeply apologize for it.

But there is light at the end of the tunnel: I started my research at TU Wien a few months ago, trying to figure out how to make SightPlayer real. But wait a second. Wasn’t the app almost done? Didn’t the homepage say “Coming soon”? Yes, it was, and yes, it did. But our team fell into the same pitfall as many researchers and students before us: we underestimated the difficulties of Optical Music Recognition. So I took a step back, revisited the algorithms and tools that we used to build SightPlayer, and decided to take a completely new approach.

But what was wrong with it? It looked quite nice. What exactly are you trying to solve?

To sum it up: the goal is to take an image of music scores, let the computer or smartphone recognize it, and then play it back to you. Research in this direction has been conducted since 1966! When we started SightPlayer, it was the first project that attempted to achieve this entirely on a smartphone. There are two commercial applications that attempt the same thing, and both work similarly badly in the wild. In retrospect, it’s good that we did not release it, or we would have gotten the same bad feedback and spoiled the name.

If you are familiar with Computer Vision, then the problem statement and the approach seem kind of obvious: detect the staff-lines, remove them, do some template matching to detect the smaller symbols, and finally restore the information that is required to play the piece back. But the bitter truth is: it is not that easy. There are many subtleties that make a huge difference when using the system on a real dataset. Take a look at these notes and tell me what the templates should look like:

[Figure: HOMUS_Samples.png – samples of handwritten music symbols]

Well, these symbols look fairly normal, although they are handwritten. I guess that by adding some templates, I will catch them. But what to do when they are put in context or look a little different?

[Figure: HOMUS_Samples_With_Staff_Line.png – the same kinds of symbols, drawn on staff-lines]

You will need a lot of templates. So some researchers try to find the staff-lines and then remove them, which is the first step where I disagree with most researchers: why discard information that guides our reading and is required to make sense of the actual notes? I claim that removing the staff-lines is not required, if the right approach is used.
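To make the brittleness of the classical approach concrete, here is a minimal sketch of template matching with OpenCV; the file names and the threshold are made up for illustration:

import cv2
import numpy as np

# Slide one symbol template across the score and record similarity scores
score = cv2.imread("score.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("quarter_rest_template.png", cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(score, template, cv2.TM_CCOEFF_NORMED)

# Keep only positions where the template matches reasonably well
ys, xs = np.where(result >= 0.8)
height, width = template.shape
for x, y in zip(xs, ys):
    cv2.rectangle(score, (int(x), int(y)),
                  (int(x) + width, int(y) + height), 0, 1)
cv2.imwrite("matches.png", score)

A symbol that is slightly rotated, thicker, or crossed by a staff-line already drops below the similarity threshold, so every visual variation effectively needs its own template – which is exactly the scaling problem described above.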

But what is the right approach, though?

My hopes lie in a new technology called Deep Learning. Check it out if you have never heard about it before. Basically, it’s a really clever way of doing machine learning, where you can perform supervised learning very easily – you more or less just provide the data and the expected output – and let the machine figure out the rest by itself. In practice, it’s a little bit more challenging, but you get the idea.
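To give an idea of what “provide the data and the expected output” looks like in code, here is a minimal sketch of a small convolutional network in Keras. The input size, layer sizes and the variables num_classes, train_images and train_labels are placeholders, not my actual model:

import tensorflow as tf

# A tiny convolutional network for classifying symbol images
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(96, 96, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Supervised learning: images in, expected labels in,
# the network figures out the rest by itself
model.fit(train_images, train_labels, epochs=10)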

So far, I have had great success with classifying handwritten music symbols and entire images of music scores. Check this out:

[Figure: MusicScoreClassifier-Screenshots_3.png – screenshots of the Android music score classifier]

This handsome Android application can distinguish music scores from arbitrary content in real-time! And the classifier for handwritten music symbols works quite well too – in fact, it performs even better than humans. It only errs on symbols like these:

[Figure: HOMUS_Misclassifications_machine.png – handwritten symbols that the classifier got wrong]

But to be honest, you need some imagination to guess the right classes of these music symbols (they are Sixteenth-Rest, 2-4-Time, Sixteenth-Note, Cut-Time, Quarter-Rest and Sixteenth-Note).

The next step is to locate music symbols in an entire sheet of music scores. I hope that it will work out as well as the first few experiments.

From now on, I will keep you updated more regularly on my research progress. I promise!