Many recent papers work in spaces other than the usual Euclidean space, a trend sometimes called geometric deep learning. Interest in these methods is growing particularly in the domains of word embeddings and graphs.
Since geometric neural networks optimize parameters that live on a manifold rather than in a flat vector space, stochastic gradient descent cannot be applied as-is: the usual update ignores the geometry of the space and can even push a point off the manifold. Instead, the Euclidean gradient is rescaled by the inverse of the metric and the iterate is mapped back onto the manifold, as sketched below.
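Here is a minimal sketch of one Riemannian SGD step on the Poincaré ball, a common choice for hierarchical embeddings. The gradient rescaling follows from the ball's conformal metric; the learning rate, the `eps` margin, and the projection-based retraction are my assumptions for illustration, not a definitive implementation:

```python
import numpy as np

def rsgd_step(x, euclidean_grad, lr=0.01, eps=1e-5):
    """One Riemannian SGD step for a point x inside the Poincaré ball.

    The ball's conformal metric means the Riemannian gradient is the
    Euclidean gradient rescaled by ((1 - ||x||^2)^2) / 4; a simple
    projection (retraction) keeps the iterate strictly inside the ball.
    """
    sq_norm = np.dot(x, x)
    riemannian_grad = ((1.0 - sq_norm) ** 2 / 4.0) * euclidean_grad
    x_new = x - lr * riemannian_grad
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:
        # Pull the point back just inside the unit ball.
        x_new = x_new / norm * (1.0 - eps)
    return x_new
```

The projection step is a cheap stand-in for the exact exponential map; it is enough to keep every iterate a valid point of the ball.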
Gromov hyperbolicity measures the “tree-likeness” of a dataset's metric. It indicates how well hierarchical embeddings such as Poincaré embeddings [1] are likely to work on a dataset; [2] and [3] are examples of papers that use it. A hyperbolicity of exactly zero means the metric is a tree metric, so values close to zero indicate high tree-likeness. A brute-force way to compute it is sketched below.
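The following is a minimal sketch of the standard four-point definition of the hyperbolicity constant δ, computed by brute force over all quadruples; the function name and the use of a precomputed distance matrix are my assumptions:

```python
import itertools
import numpy as np

def delta_hyperbolicity(d):
    """Brute-force Gromov delta via the four-point condition, O(n^4).

    `d` is a symmetric (n, n) array of pairwise distances. The result is
    the smallest delta such that, for every base point w and all points
    x, y, z: (x|y)_w >= min((x|z)_w, (y|z)_w) - delta, where (x|y)_w is
    the Gromov product (d(x,w) + d(y,w) - d(x,y)) / 2.
    """
    n = d.shape[0]
    delta = 0.0
    for w in range(n):
        others = [p for p in range(n) if p != w]
        for x, y, z in itertools.combinations(others, 3):
            # Gromov products of the triple with respect to base point w.
            prods = sorted([
                0.5 * (d[x, w] + d[y, w] - d[x, y]),
                0.5 * (d[x, w] + d[z, w] - d[x, z]),
                0.5 * (d[y, w] + d[z, w] - d[y, z]),
            ])
            # The binding constraint: smallest >= second smallest - delta.
            delta = max(delta, prods[1] - prods[0])
    return delta
```

On the shortest-path distance matrix of a tree this returns 0. Since δ scales with the distances, it is often reported relative to the diameter of the metric when comparing datasets.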
I decided to update my blog and replace minimal-mistakes with my own Jekyll theme. My goal was to increase the space for content and reduce the amount of personal information the reader sees. I took inspiration from the Bootstrap theme Clean Blog. Some of my older posts need updating, so I removed them for now.
In this post, I will implement some of the most common loss functions for image segmentation in Keras/TensorFlow. I will only consider the two-class (i.e., binary) case.
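To give a flavor of what the post covers, here is a minimal sketch of one common choice, the soft Dice loss; the `smooth` constant is an assumption I add to keep the loss defined on empty masks:

```python
from tensorflow.keras import backend as K

def dice_loss(y_true, y_pred, smooth=1.0):
    """Soft Dice loss for binary segmentation masks.

    Expects `y_true` and `y_pred` with values in [0, 1] (e.g. a sigmoid
    output); `smooth` keeps the loss defined when both masks are empty.
    """
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    dice = (2.0 * intersection + smooth) / (
        K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return 1.0 - dice
```

Since Keras accepts any callable with the `(y_true, y_pred)` signature, this can be passed directly as `model.compile(optimizer="adam", loss=dice_loss)`.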
In this post, I will compare several lemmatizers for Portuguese. For the comparison, I downloaded subtitles from various television programs; the sentences are written in European Portuguese (EP).