# How to find and combine ML algorithms to improve your score

In this post, I will first explain how to find the best ML algorithms for a data set using some simple math. Then I will introduce a way to combine multiple algorithms.

**Finding the right algorithms**

Let’s assume you have already prepared your data. What most people would do now is put all the available algorithms in a list, run cross validation and then choose the best algorithm based on some loss function. However, this method will not always give you the best results.

Some algorithms make a number of assumptions that need to be considered before evaluating the loss function. For example, k-NN regression and gradient boosting can handle multicollinearity without any problems. Features like “sum of n independent variables” can even improve your score. But this is not the case for algorithms based on linear regression: your score will suffer significantly if you include features that are perfectly correlated with each other.

In other cases you include a feature that works great for linear models, but really badly for gradient boosting. For example, some features could grow cubically, so you include the terms x^2 and x^3. Your polynomial regression will have a lower loss, but your boosting algorithm will just become much slower. The bottom line is: what works for some algorithms might not work for others.

To deal with this problem, you have to treat the features as additional parameters of each individual algorithm. To make this more concrete, let’s consider the following code:

This function reads a CSV file, adds some features and returns the data. The feature engineering part is not really important here; the main thing is that you can call this function as follows:

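The call pattern might look like this (a stub loader stands in so the snippet runs on its own; the three flags correspond to the arguments of the loader above):

```python
# Stub standing in for the real loader, just to show the call pattern.
def load_csv(path, use_sum=0, use_ratio=0, use_log=0):
    return (use_sum, use_ratio, use_log)

results = [
    load_csv("train.csv", 0, 0, 0),
    load_csv("train.csv", 0, 0, 1),
    load_csv("train.csv", 0, 1, 0),
    load_csv("train.csv", 0, 1, 1),
    load_csv("train.csv", 1, 0, 0),
    # ... and so on, up to load_csv("train.csv", 1, 1, 1)
]
```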
And this is simply counting in binary. What we can do now is generate the cartesian product {0, 1}^n of the flag values and try out all the possibilities. Here, with n = 3 flags, we therefore have to try 2^3 = 8 possibilities.

We are of course not limited to only using zeros and ones. If we had a feature like “polynomial regression”, we could just add the set {1, 2, …, d_max} to the cartesian product, where d is the degree of the polynomial.

In code this means:
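A sketch using itertools, assuming the three binary feature flags from above:

```python
from itertools import product

# Every 0/1 setting of the three feature flags — counting in binary.
flag_settings = list(product([0, 1], repeat=3))
print(len(flag_settings))   # 2**3 = 8
print(flag_settings[:2])    # [(0, 0, 0), (0, 0, 1)]
```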

The process is then:

- call load_csv with (0, 0, 0)
- do cross validation on all the algorithms
- call load_csv with (0, 0, 1)
- and so on

Finally, after trying out all the possibilities we have found the best ML algorithms for the task at hand. Because the algorithm depends on the selected features, we have to create a new class as follows:
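One way to write such a wrapper (a sketch; it assumes a loader with the signature of load_csv, a scikit-learn-style estimator, and a column named "target"):

```python
class AlgorithmWithFeatures:
    """Couple an estimator with the feature flags it was selected with.

    `loader` is a function like load_csv; `flags` are the arguments
    that switch the engineered features on or off for this algorithm.
    """

    def __init__(self, model, flags, loader):
        self.model = model
        self.flags = flags
        self.loader = loader

    def fit(self, path):
        data = self.loader(path, *self.flags)
        X, y = data.drop(columns="target"), data["target"]
        self.model.fit(X, y)
        return self

    def predict(self, path):
        # The file is assumed to contain the target column as well.
        data = self.loader(path, *self.flags)
        return self.model.predict(data.drop(columns="target"))
```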

Then the algorithms can be called like this:
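For example as follows; a toy loader and a trivial mean model stand in here so the snippet runs on its own, and the wrapper class described above is repeated in compact form (in practice you would plug in the real load_csv and real estimators such as k-NN or gradient boosting):

```python
import numpy as np
import pandas as pd

# Stand-ins so the snippet is self-contained.
def load_csv(path, use_sum=0):
    data = pd.DataFrame({"a": [1.0, 2.0, 3.0], "target": [2.0, 4.0, 6.0]})
    if use_sum:
        data["a_twice"] = data["a"] * 2
    return data

class MeanModel:
    def fit(self, X, y):
        self.mean_ = float(np.mean(y))
        return self
    def predict(self, X):
        return np.full(len(X), self.mean_)

class AlgorithmWithFeatures:
    def __init__(self, model, flags, loader):
        self.model, self.flags, self.loader = model, flags, loader
    def fit(self, path):
        data = self.loader(path, *self.flags)
        self.model.fit(data.drop(columns="target"), data["target"])
        return self
    def predict(self, path):
        data = self.loader(path, *self.flags)
        return self.model.predict(data.drop(columns="target"))

# Each algorithm carries its own feature flags.
models = [
    AlgorithmWithFeatures(MeanModel(), (0,), load_csv),
    AlgorithmWithFeatures(MeanModel(), (1,), load_csv),
]
for m in models:
    m.fit("train.csv")
print(models[0].predict("train.csv"))  # [4. 4. 4.]
```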

**Combining the algorithms**

First, we have to decide how many ML algorithms we want to combine. Let’s say we want to choose k algorithms from a total of n algorithms. This means we need to consider (n + k − 1 choose k) possible combinations; with n = 5 and k = 3, for example, that is (7 choose 3) = 35.

We are using combinations with replacement here, because it is possible that k copies of the same algorithm produce better results than k different algorithms.

More concretely, we can create a set A that contains all n algorithms. Then we draw k algorithms from that set, combine the results via a meta model and calculate the loss. In the end, we choose the combination with the least loss.

The first part looks like this:
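A sketch with the illustrative values n = 5 and k = 3:

```python
from itertools import combinations_with_replacement

n, k = 5, 3  # choose k algorithms out of n, repeats allowed
combos = list(combinations_with_replacement(range(n), k))
print(len(combos))             # (n + k - 1 choose k) = 35
print(combos[0], combos[-1])   # (0, 0, 0) (4, 4, 4)
```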

Here we are considering combinations of the indices 0, 1, …, n − 1. The function combinations_with_replacement gives tuples of the form (0, 0, 0), (0, 0, 1), (0, 0, 2) and so on. We can use these numbers as indices into our set A, i.e. (A[0], A[0], A[0]), (A[0], A[0], A[1]), (A[0], A[0], A[2]).

We could now combine the algorithms (tuples) we have found via blending or stacking. However, these methods are not as clean and require a bit more code. This is why we will do something simpler in this post. If you are focused on winning a competition, you should probably stick to blending/stacking because these methods improve your score even more. In case you don’t know yet how stacking works, take a look here for some code: [1].

Let A still be the set that contains the n algorithms. Then we can collect the predictions of all algorithms:
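A sketch (a dummy constant model stands in for the fitted algorithms so the snippet runs on its own):

```python
import numpy as np

class Const:
    """Dummy algorithm standing in for a fitted model."""
    def __init__(self, c):
        self.c = c
    def predict(self, X):
        return np.full(len(X), self.c)

# A is the set of fitted algorithms, X_val the m validation samples.
A = [Const(1.0), Const(2.0), Const(3.0)]
X_val = np.zeros((4, 2))

# m x n matrix: column j holds the predictions of algorithm A[j].
R = np.column_stack([a.predict(X_val) for a in A])
print(R.shape)  # (4, 3)
```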

The result is an m × n matrix R that contains the predictions of the n algorithms in A. We can select specific algorithms like this (where [1, 2] and [1, 3, 4] are the tuples from before):
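NumPy's fancy indexing makes the column selection a one-liner (R is a stand-in prediction matrix here):

```python
import numpy as np

R = np.arange(20.0).reshape(4, 5)   # stand-in m x n prediction matrix

# Selecting columns picks out the predictions of the chosen algorithms.
subset_a = R[:, [1, 2]]       # algorithms 1 and 2
subset_b = R[:, [1, 3, 4]]    # algorithms 1, 3 and 4
print(subset_a.shape, subset_b.shape)  # (4, 2) (4, 3)
```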

The next step is to find the meta model. Let X be an m × k matrix containing the predictions of the k selected algorithms, and let y be the vector of true targets. We have to find a vector w such that Xw ≈ y.

In theory, we can just use the squared norm as loss, L(w) = ||Xw − y||^2. Setting the first matrix derivative to zero, we get w = (X^T X)^(−1) X^T y. But this would just be linear regression.

Let us try something different and add some constraints to w. Assume 0 ≤ w_i ≤ 1 and that the weights sum to 1. We can now use either some kind of mathematical optimization or try out all the possibilities on a grid to find w. For the sake of simplicity, I chose the latter method.

In the first section, we used the cartesian product {0, 1}^n, i.e. base 2. This time our base is much higher — a weight grid of {0, 0.1, …, 1} already has 11 values per weight — so we have to be careful about the number of possibilities. A few million possibilities should still be feasible on a modern computer. If we are sure that the contributions of the individual algorithms are never higher than 50%, then we can cap each w_i at 0.5 instead of 1.

Let’s look at some code:
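A sketch of the grid search (function and parameter names are illustrative; P is the prediction matrix of the k selected algorithms):

```python
from itertools import product

import numpy as np

def best_weights(P, y, step=0.1):
    """Try all weight vectors on a grid and keep the one with least loss.

    P: m x k matrix of predictions of the k selected algorithms.
    y: vector of true targets.
    Constraints: 0 <= w_i <= 1 and sum(w) == 1.
    """
    k = P.shape[1]
    grid = np.arange(0.0, 1.0 + step / 2, step)
    best_w, best_loss = None, np.inf
    for w in product(grid, repeat=k):
        if abs(sum(w) - 1.0) > 1e-9:
            continue  # keep only weight vectors that sum to one
        w = np.array(w)
        loss = np.sum((P @ w - y) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss
```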

In order to get a more accurate result, I set the grid step to a smaller value.

So we can see that our meta model just calculates a weighted average of the selected algorithms’ predictions.

**References**

[1] https://github.com/log0/vertebral/blob/master/stacked_generalization.py
