Classification of Functions Using Machine Learning
Martin Lukac, Aigerim Yessenbayeva, Michael Lewis and Krzysztof Podlaski
We study the approximation of function classification by machine learning. We first determine all the NPN and NPNP classes of three-bit reversible and one- to five-bit irreversible functions. We then apply machine learning to learn the class labels for each set of functions, and we search the parameter space for several optimization criteria, such as the size of the neural network, the architecture, and the learning parameters, that allow the network under training to learn the NPN classification problem with up to 100% accuracy. We also determine whether, for n-bit functions, the classification can be performed using two smaller networks, each classifying (n-1)-bit functions. Additionally, we discuss the logic solvability of all four-bit functions using the game of Nonograms and determine a relation between the learnability of the classification and logic solvability. We find that while there is a set of functions that can be classified by representing a function as a product of two subfunctions, a more complex model is needed to classify n-bit NPN functions as a function of (n-1)-bit NPN functions. In addition, we determine that approximate classification is always possible, with certain NPN classes being classified perfectly while others are not. Moreover, we observe a direct relationship between the learnability of certain classes and the speed of finding a solution for these classes when they are used in a logic game. Finally, we show that the approximation of the classification using machine learning can replace standard database-based classification. The advantages of the machine learning approach are end-to-end learning, the lack of circuit design requirements for classification, and the ability to learn the class-distinction problem easily.
Keywords: Logic functions, machine learning
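To make the NPN equivalence central to the abstract concrete, the following is a minimal brute-force sketch (not the authors' code) that canonicalizes a truth table under input negation, input permutation, and output negation, and counts the resulting NPN classes for small n. The names npn_canonical and count_npn_classes are illustrative, not from the paper.

```python
from itertools import permutations, product

def npn_canonical(tt, n):
    """Return the lexicographically smallest truth table in the NPN orbit of tt.

    tt is a tuple of 2**n output bits; entry idx is the output for the
    input assignment whose i-th variable equals bit i of idx.
    """
    best = None
    for perm in permutations(range(n)):          # P: permute inputs
        for neg in product((0, 1), repeat=n):    # N: negate inputs
            for out_neg in (0, 1):               # N: negate the output
                cand = []
                for idx in range(2 ** n):
                    src = 0
                    for i in range(n):
                        # route (possibly negated) input bit i to position perm[i]
                        src |= (((idx >> i) & 1) ^ neg[i]) << perm[i]
                    cand.append(tt[src] ^ out_neg)
                cand = tuple(cand)
                if best is None or cand < best:
                    best = cand
    return best

def count_npn_classes(n):
    """Count NPN classes by canonicalizing every n-variable Boolean function."""
    classes = set()
    for f in range(2 ** (2 ** n)):               # all 2^(2^n) truth tables
        tt = tuple((f >> i) & 1 for i in range(2 ** n))
        classes.add(npn_canonical(tt, n))
    return len(classes)

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(n, count_npn_classes(n))           # expected: 2, 4, 14
```

For n = 2 this yields 4 classes and for n = 3 it yields 14, matching the known NPN class counts; the database-based classification mentioned in the abstract amounts to a lookup keyed by such canonical forms, which is what the learned classifier approximates end to end.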