Neural networks have been immensely popular for some time now, and a large portion of machine learning research is dedicated to achieving minor gains in accuracy at a huge cost in power. We hypothesize that, given the same love and care (e.g., in terms of nifty pre-processing strategies), traditional machine learning methods have the potential to achieve similar accuracy while consuming much less power. We are interested in questions such as the following:
- Using the same pre-processing techniques, can traditional methods achieve similar performance? For which types of datasets does this work? (A minimal comparison of this kind is sketched after this list.)
- There is a trade-off between accuracy and the number of parameters or layers (as a proxy for power consumption), and we can expect the last bit of accuracy to be the costliest. Can we find a more sustainable stopping criterion, sacrificing a little accuracy to save a lot of power?
- If we compare traditional ML methods to NNs under the same parameter budget, what do we observe?
- There is a myth that only NNs can perform well on certain types of data (such as images). Can we transfer the special tricks NNs use on such data (e.g., convolutions) to traditional ML methods? (See the second sketch after this list.)
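
As a starting point for the first and third questions, a comparison could look like the following minimal sketch. It assumes scikit-learn is available; the digits dataset, the specific models, and the node-count proxy for the forest's size are illustrative choices, not prescribed by the project.

```python
# Minimal sketch: a small NN vs. a traditional method on identically
# pre-processed data, reporting a crude parameter count as a size proxy.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Identical pre-processing for both models: standardization.
nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                 random_state=0))
rf = make_pipeline(StandardScaler(),
                   RandomForestClassifier(n_estimators=50, random_state=0))

for name, model in [("MLP", nn), ("Random forest", rf)]:
    model.fit(X_train, y_train)
    acc = model.score(X_test, y_test)
    est = model[-1]
    if hasattr(est, "coefs_"):  # neural network: count weights and biases
        n_params = (sum(c.size for c in est.coefs_)
                    + sum(b.size for b in est.intercepts_))
    else:  # forest: total tree-node count as a rough "parameter" proxy
        n_params = sum(t.tree_.node_count for t in est.estimators_)
    print(f"{name}: accuracy={acc:.3f}, ~{n_params} parameters")
```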
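
For the last question, one toy way to transfer a NN trick is to extract image features with fixed random convolutional filters and train a linear SVM on top. The filter count, pooling scheme, and dataset below are assumptions chosen purely for illustration.

```python
# Sketch: "borrowing" convolutions for a traditional method. Fixed random
# filters + ReLU + global pooling produce features for a linear SVM.
import numpy as np
from scipy.signal import convolve2d
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
filters = rng.standard_normal((32, 3, 3))  # 32 random 3x3 filters (assumed)

def conv_features(images):
    # Convolve each image with every filter, apply ReLU, then summarize
    # each response map by its mean and max (global pooling).
    feats = []
    for img in images:
        maps = [np.maximum(convolve2d(img, f, mode="valid"), 0)
                for f in filters]
        feats.append([s for m in maps for s in (m.mean(), m.max())])
    return np.asarray(feats)

X, y = load_digits(return_X_y=True)
F = conv_features(X.reshape(-1, 8, 8))
F_train, F_test, y_train, y_test = train_test_split(F, y, random_state=0)

clf = LinearSVC(dual=False).fit(F_train, y_train)
print(f"Linear SVM on random conv features: "
      f"accuracy={clf.score(F_test, y_test):.3f}")
```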
Recommended skills: Basic knowledge of machine learning and Python