
What We’ve Learned at Machine Learning Prague 2019



The S-PRO experts visited one of the biggest machine learning conferences in Europe, and we are ready to share how it went, what impressed us, and how AI/ML can be applied across industries to transform a wide range of processes.

Ready to know the details? Let’s get to it!

What is the Machine Learning Prague conference about?

The opening talks on Saturday and Sunday were the most technically deep and dense (apparently scheduled while the brain is still fresh from the morning coffee).

Stuart Armstrong’s talk was devoted to inverse reinforcement learning: how we can interpret the behavior of human experts to train reinforcement learning algorithms, and what pitfalls we may encounter when relying on a particular interpretation.

In his presentation, Tomaso Poggio from MIT tried to answer the fundamental question of deep learning: why do neural networks work so well? Tomaso explained the importance of hierarchical local feature extraction (as in convolutional neural networks) and why this approach allows us to avoid the curse of dimensionality.
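To make that intuition a bit more concrete, here is a rough, made-up illustration (our own example, not from the talk): even on a tiny 32×32 image, a fully connected layer already needs about a million parameters, while a single local 3×3 convolutional filter needs ten.

```python
import torch.nn as nn

# Rough illustration (not from the talk): parameter counts for a dense
# layer vs. a small local convolution on a 32x32 single-channel image.
dense = nn.Linear(32 * 32, 32 * 32)               # maps every pixel to every pixel
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # one 3x3 local filter

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense))  # 1049600 parameters
print(count(conv))   # 10 parameters
```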

Leland McInnes gave an interesting talk about topological approaches to unsupervised learning. Leland showed how, by using topology to analyze manifolds in high-dimensional spaces, we can solve the problems of clustering, anomaly detection, and dimensionality reduction. This approach seems very innovative and promising for such critically important tasks of our time as unsupervised and semi-supervised learning, as well as learning from small datasets.
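As a hedged illustration of this topology-driven approach, the sketch below uses the umap-learn library (Leland McInnes is one of its authors) to project high-dimensional data into two dimensions; the toy data and parameter choices here are our own assumptions, not something shown at the talk.

```python
import numpy as np
import umap  # pip install umap-learn

# Toy high-dimensional data standing in for real features
X = np.random.rand(1000, 64)

# UMAP builds a topological (fuzzy simplicial) representation of the
# data manifold and then optimizes a low-dimensional layout of it.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2)
embedding = reducer.fit_transform(X)

print(embedding.shape)  # (1000, 2)
```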

The speech by Marc Roymen from Spotify was perhaps the most substantial and well-structured. I learned what an important role playlists play on modern streaming platforms and how they influence the music industry. Without missing important details, Marc talked about how the Spotify recommender system works, including the algorithms, infrastructure, deployment, and tools they use. I would say this talk very accurately describes what you might encounter when building a production-level, high-load recommender system.

I was pleasantly surprised by the fact that, to make recommendations, Spotify learns embeddings of tracks based on their positions in playlists (similar to learning word embeddings from word positions in sentences). Additionally, it uses information about the track content extracted by a convolutional neural network from the spectrogram. By the way, a special thank you is due to the Spotify team for their excellent slide design.
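Here is a minimal sketch of that playlist-as-sentence idea (our own toy example, not Spotify’s actual pipeline): playlists are treated as “sentences” of track IDs, and word2vec-style embeddings are trained with gensim so that tracks that co-occur in playlists end up close in the embedding space.

```python
from gensim.models import Word2Vec  # pip install gensim

# Playlists as "sentences" of track IDs (made-up data for illustration)
playlists = [
    ["track_12", "track_7", "track_99", "track_3"],
    ["track_7", "track_99", "track_45"],
    ["track_3", "track_12", "track_45", "track_7"],
]

# Skip-gram embeddings: co-occurring tracks get similar vectors
model = Word2Vec(sentences=playlists, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=50)

# Nearest neighbours of a track in the learned embedding space
print(model.wv.most_similar("track_7", topn=3))
```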

Lotem Peled talked about very interesting alternative ways of collecting labeled data for training machine learning algorithms when classical methods such as searching for public labeled data and crowdsourcing cannot be applied.

Jan Zikeš and Ondřej Székely shared with us the nuances of processing satellite images with convolutional neural networks. Jan Zikeš described very interesting approaches to the visualization of multichannel images (obtained by different methods) using GANs.

Ondřej Székely spoke about the difficulties that may arise when solving detection and segmentation problems, such as detecting small or occluded objects. He also paid particular attention to advanced preprocessing techniques such as the wavelet transform, showing how much impact hand-crafted features can have on the performance of a specific algorithm.
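As a hedged illustration of wavelet-based preprocessing (our own sketch, not the speaker’s code), the snippet below uses the PyWavelets library to decompose an image channel into approximation and detail coefficients, which could then be fed to a model as additional hand-crafted feature channels.

```python
import numpy as np
import pywt  # pip install PyWavelets

# Toy single-channel "satellite image" standing in for real data
image = np.random.rand(256, 256)

# Single-level 2D discrete wavelet transform (Haar wavelet):
# cA is the coarse approximation; cH, cV, cD are horizontal,
# vertical and diagonal detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Stack the sub-bands as extra channels for a downstream model
features = np.stack([cA, cH, cV, cD], axis=0)
print(features.shape)  # (4, 128, 128)
```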

In his talk, Pavel Kordik covered algorithms for the automatic search of optimal neural network architectures (AutoML algorithms) and the latest research in this area. I was very impressed by the work of the DARTS authors: they describe a differentiable approach to architecture search that generates architectures comparable in performance to those obtained by evolutionary or reinforcement-learning-based approaches, while requiring hundreds of times fewer resources for the search.
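The core trick in DARTS is to replace the discrete choice between candidate operations with a softmax-weighted mixture, so the architecture parameters can be optimized by gradient descent. Below is a toy PyTorch sketch of that relaxation; the candidate operation set and shapes are our own simplifications, not the paper’s full search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture of candidate operations (the DARTS relaxation)."""

    def __init__(self, channels):
        super().__init__()
        # A reduced, illustrative candidate set (DARTS uses a larger one)
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Identity(),
        ])
        # Architecture parameters, learned by gradient descent
        # (in DARTS, in alternation with the network weights)
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # The edge output is a convex combination of all candidate ops
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage: a single searchable "edge" applied to a feature map
x = torch.randn(1, 16, 32, 32)
edge = MixedOp(channels=16)
print(edge(x).shape)  # torch.Size([1, 16, 32, 32])
```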

The last Sunday lecture was one of the most unexpected and fascinating: the speaker, Luba Elliot, spoke about very interesting examples of how modern art can develop at its intersection with machine learning. I particularly remember the concepts of makeup, hairstyles, and scarf prints designed to break face recognition systems.

Works by artists abstractly depicting objects such as a starfish or a human brain were presented, with the artists’ note that a future artificial intelligence should interpret such images as effectively as humans do now. A machine-generated portrait sold at auction for $432,500 was also mentioned.

Conclusion

Machines are learning ever faster. Artificial intelligence, together with machine learning, is driving a real revolution in modern technology, directly affecting our everyday lives, business industries, and global processes.

Every successful enterprise should keep an eye on the latest trends and enhance its business processes with the most innovative technologies.

S-PRO continues expanding its expertise by implementing cutting-edge technologies into powerful IT solutions.


Vladislav Shmyhlo, AI engineer. Kaggle Master (ranked 310 out of 94,000), Airbus Ship Detection Challenge winner, deep learning architect.
