Minds Mastering Machines (m3) 2019 Recap - Day 2

Welcome to my summary of the second day of minds mastering machines 2019. If you have not read my article about the first day, you can find it here. Looking back at day two, I can say that I focused a lot on topics around data science without going into the nitty-gritty details. So, without further ado: let’s start with the law.

Day 2 - Thursday

Legal pitfalls when using AI and ML

A very different view on the whole AI world was presented in this talk by a lawyer. Note that we are talking about German law here, and as you might know, this is a highly controversial topic in Germany. The talk raised a lot of questions without giving any answers. And that is a good thing, because there aren’t any answers yet. We, and by that I mean the whole data science community, have to think about the regulations and laws around our powerful algorithms. He covered three topics:

  • Contract law

    • Is it possible for a system to agree to a contract?

    • let’s say you order a second seat for a person on a flight and you set the name to “to be determined”

      • if the system accepts this order: can this contract be challenged by the airline?

      • if there is a highly engineered machine learning algorithm in place that accepts this order: can this contract be challenged?

  • Responsibility

    • if machine learning algorithms or artificial intelligence make errors, who is at fault? Where is the line? What if plenty of people were involved in the creation of such a system? Think about autonomous cars here.

  • Data protection

    • since the GDPR went “live” last year it has been a hot topic all over Europe, and we talked about it here, too. However, the first two parts inspired me more.

This talk was held by Fritz Pieper. He encouraged the law and tech communities to get together more. Together with other people from the law community he created Telemedicus, a non-profit dedicated to discussing legal questions around information technology. I missed the talks about Decision Boundaries and Use Cases for Long Short-Term Memory Networks.

Modern methods in text mining

Representing words as vectors with the nice property that you can calculate with them: king − man + woman = ?

Now it was time for me to get up to date on text mining. The following talk was a summary of the state-of-the-art methods in analyzing text. I learned that a lot of folks in the text mining community are big fans of Sesame Street, since two of the top methods are called ELMo and BERT. I have not done much with text mining over the last year, so it was great to catch up.

In the 45 minutes I learned about the following techniques:

  • Word Embeddings: find similarity between words

    • Word2Vec

    • FastText (only briefly mentioned)

    • GloVe (only briefly mentioned)

  • Topic Modeling: find topics in documents (like clustering)

    • you essentially look at the document-term matrix (documents vs. words) and re-arrange the rows and columns to form clusters

  • ELMo: Contextualized embeddings

    • combine the context of the word with the embeddings

  • BERT: Transfer learning

    • use of pre-trained models (this technique is widely used in computer vision and finally arrived in text mining as well)

    • very cool application: question answering

      • throw a large pile of documents (like Wikipedia articles) into a pre-trained base model and let it answer questions about them

      • it works surprisingly well

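The word-vector arithmetic mentioned above can be illustrated with a tiny sketch. The vectors below are made up for illustration (real Word2Vec or GloVe embeddings have hundreds of dimensions and are learned from large corpora); the point is only to show how the famous analogy falls out of cosine similarity:

```python
import numpy as np

# Toy 3-dimensional embeddings, invented for illustration -- real
# Word2Vec/GloVe vectors are learned and much higher-dimensional.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(target, exclude):
    """Return the vocabulary word whose vector is closest (by cosine
    similarity) to the target vector, skipping the query words."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {w: v for w, v in vectors.items() if w not in exclude}
    return max(candidates, key=lambda w: cos(vectors[w], target))

# The classic analogy: king - man + woman should land near queen.
result = nearest(vectors["king"] - vectors["man"] + vectors["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # → queen
```

With a real pre-trained model you would do the same thing via a library such as gensim, but the mechanism is exactly this nearest-neighbor search in vector space.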

This talk was held by Christian Winkler and Jens Albrecht. I missed the talks Reinforcement Learning Introduction and Use machine learning to analyze virtual car crashes.

Intelligent Mobility and Logistics

The next talk I attended was about a project at Deutsche Bahn, which spawned emotional discussions afterwards. A lot of people in Germany are angered or annoyed by the whole train system because of delays or cancellations. It should come as no surprise that DB is actively working on solving these issues, because it is in their interest to have satisfied customers. This talk was about a prototype that is currently in the works to decrease delays by re-arranging or cancelling the right trains so the rest of the system runs smoothly. They chose the main station in Stuttgart for their research (which in itself is kind of a funny choice).

They are building a system based on reinforcement learning that is activated when an event occurs. This event could be a tree hitting the tracks, a problem with a train, unusually high passenger numbers and so on. From that point on, the system tries to take the right actions to get the network back to a healthy state. In general that is a very straightforward approach, and it led to good results in their simulations. If you are interested in reinforcement learning, I can recommend OpenAI Gym, which is a great platform to test out these algorithms using old arcade games.
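To make the idea of "take actions until the system is healthy again" concrete, here is a minimal tabular Q-learning sketch. The toy problem is my own invention, not the DB prototype: states count how disrupted the network is, and the agent learns that intervening (e.g. cancelling or re-routing a train) beats waiting:

```python
import random

random.seed(0)

# Toy problem, loosely inspired by the talk: states 0..4 measure how
# disrupted the network is (4 = major disruption, 0 = healthy).
# Action 0 = "wait" (nothing changes), action 1 = "intervene"
# (moves one step toward healthy). All names are illustrative.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    next_state = max(0, state - 1) if action == 1 else state
    reward = 10.0 if next_state == 0 else -1.0  # penalty while disrupted
    return next_state, reward, next_state == 0

# Standard Q-learning update:
#   Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):
    s, done = 4, False          # each episode starts from a big disruption
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)          # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])  # exploit
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy should always intervene while disrupted.
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(1, N_STATES)]
print(policy)
```

The real system of course has a vastly larger state and action space, but the feedback loop of "act, observe reward, update value estimates" is the same.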

What I really liked was that we got real insights into the development of an AI prototype. The talk was very project-heavy, which was a nice change compared to the other, tech-heavy talks.

This talk was held by Thomas Thiele. I missed the talks Question Answering with NLP and Bring your own model with TensorFlow Serving.

Keynote: Style Transfer - How neural nets create art

Now it was time for the second keynote of the conference, which was much more lighthearted than the first one. It was basically a showcase of using neural networks in augmented reality to transfer the style of certain art pieces onto the scene the user sees. It started with a beginner-friendly introduction to the concepts of neural networks and style transfer.

Take the content of the first image and the style of the second image.

Style transfer works similarly to training a standard neural network: you use gradient descent to minimize an error defined by a loss function. In this case the loss function takes two things into account: how accurately the shapes of the content source are represented in the output (these shapes have to be at the same location in the picture), and whether style elements of the style source are present (here the location is irrelevant).

Content loss

Style loss
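The two losses can be sketched in a few lines. The feature maps below are random stand-ins (in an actual style transfer they come from intermediate layers of a pre-trained CNN such as VGG); the sketch only shows why content loss is location-sensitive while style loss, via the Gram matrix, is not:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for CNN feature maps (channels x height x width). In real
# style transfer these are activations of a pre-trained network.
C, H, W = 8, 4, 4
content_feat   = rng.normal(size=(C, H, W))
style_feat     = rng.normal(size=(C, H, W))
generated_feat = rng.normal(size=(C, H, W))

def content_loss(gen, content):
    # Location matters: compare activations element-wise.
    return np.mean((gen - content) ** 2)

def gram(feat):
    # Gram matrix: channel-by-channel correlations. Spatial positions
    # are summed out, which is why style is location-independent.
    f = feat.reshape(feat.shape[0], -1)   # (C, H*W)
    return f @ f.T / f.shape[1]

def style_loss(gen, style):
    return np.mean((gram(gen) - gram(style)) ** 2)

# Total loss is a weighted sum; the 0.5 weight is arbitrary here.
total = content_loss(generated_feat, content_feat) + 0.5 * style_loss(generated_feat, style_feat)
print(total)
```

Gradient descent is then run on the pixels of the generated image (not on network weights) to push this total loss down.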

I think you can get a great feel for this technology by looking at this video:

In their project they combined this technology with a virtual reality headset that had cameras mounted on the front. This gave them the opportunity to walk around and see a style-transferred world. It was a very light-hearted, funny and interesting keynote. I think it should motivate us to try these new things out even if there is no apparent use for them yet. Along the way your team will learn a lot about technologies that will be important in the coming years.

This keynote was held by Thomas Endres and Martin Förtsch.

AI Ethics, Security & Safety - Human and Machine: Who protects who?

After the keynote I chose to listen to a talk about AI ethics, security and safety. It was held by a security consultant and touched on many points that were already mentioned in the earlier talk about law. For me it is really interesting what other fields have to say about AI and what they expect from us insiders.

He first showed us a small matrix of the various attack threats that appear when humans, AI and machines meet.

Man vs AI vs Machine matrix

Humans will threaten AI systems (just remember the part about AI security from day 1) by trying to break them. On the other hand, we have to make sure that AI systems won’t threaten humans or groups. An AI can easily develop a bias against certain groups of people; I think the most famous example is Amazon’s attempt to use AI in their recruiting process. The singularity is also a topic that should not be ignored, however far away it may be. We were told to treat it as a scale and not a single event. We are moving towards it, and we had better start preparing.

At the end he mentioned the new (April 2019) EU guidelines for trustworthy AI, which I think are worth sharing.

EU guidelines “Ethics guidelines for trustworthy AI”

Overall it was a nice overview of the current view of information security experts on artificial intelligence. This talk was held by David Fuhr. I missed the talks How to evaluate the quality of my machine learning model and Deep Learning with small data.

From Prototyping to Production - Portability of neural networks

The last two talks were thematically very close. I was very interested in learning how to transfer a neural network from one framework to another. Or (and that is exactly what I want to know more about) from your local Python notebook to production.


There are corresponding blog articles available, and therefore I won’t go into too much detail about everything that was shown. He showed us three approaches, which you can see in the image below.

You can find more information about each approach in the corresponding blog articles.

What I take away from this talk is that ONNX seems to be becoming the standard for model translation. And this even without native support from TensorFlow.

This talk was held by Marcel Kurovski. You can find his comprehensive blog article here. I missed the talks Performance and scalability with optimized AI libraries, Tool breakage prediction in realtime and Workflow with JupyterLab.

Commissioning of your models on the iPhone using ONNX

The final talk of the conference was about the same topic as the previous one: model translation. This time it was about putting your model onto an iPhone, which means using ONNX to translate a PyTorch model to CoreML. The talk was very detailed and in-depth and went step by step through the whole process. If you want to know how to put your model on an iPhone, I recommend the blog posts by Nico. You can find them here:

This talk was held by Nico Axtmann. I missed the talks Monitoring of model performance and Tool breakage prediction in realtime.


Last year I said that the m3 conference was a hit, and I think this year was on par. However, I think the conference has to decide in the upcoming years which group it wants to target. I heard from some people that the talks were not deep enough; I think they were perfectly fine. It is not the job of a 45-minute talk to go through the source code of a neural net. There are enough examples of that on the internet. I really liked the talks that focused on moving models to production, ethics & law issues with AI and insights into real-world projects. I hope that these talks will still be part of the program next year. All in all, I think the community has matured a bit, and a lot more people have real use cases to work on compared to last year. Machine learning seems to be slowly arriving in more and more companies.

Everything around the talks was as great as I expected: a very good location, a friendly atmosphere and good catering. You can find some pictures in my article about day one. There will be an m3 in 2020 and I encourage you to go. It is really good and you can learn a lot.

One more thing …

In the first article I mentioned that I won something in the raffle, and that is true. Every visitor could participate in several raffles by the organizers or the companies that had booths. I happened to win a mini drone and therefore have one additional hobby now. I haven’t tested it yet, but I will soon.

My new hobby?