A proposed architecture claimed to beat MLPs in both accuracy and interpretability. The authors state: "KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model accuracy and interpretability." They also acknowledge a current limitation: "Currently, the biggest bottleneck of KANs lies in its slow training. KANs are usually 10x slower than MLPs, given the same number of parameters." Whether these claims hold up and become widely accepted remains to be seen.
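The "activation functions on edges" idea can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a radial-basis expansion as a stand-in for the learnable B-splines in the actual KAN paper, and the function and parameter names (`kan_layer`, `coeffs`, `centers`) are made up here for clarity.

```python
import numpy as np

def kan_layer(x, coeffs, centers, width=1.0):
    """One KAN-style layer: each edge (i, j) applies its own learnable
    1D function phi_ij to input x[i]; output node j sums its incoming edges.

    phi_ij is approximated with a radial-basis expansion (a stand-in for
    the B-spline parameterization used in the KAN paper):
        phi_ij(t) = sum_k coeffs[i, j, k] * exp(-((t - centers[k]) / width)**2)

    x:       (n_in,)            input vector
    coeffs:  (n_in, n_out, K)   learnable coefficients, one set per edge
    centers: (K,)               fixed basis-function centers
    """
    # Basis values for each input coordinate: shape (n_in, K)
    basis = np.exp(-(((x[:, None] - centers[None, :]) / width) ** 2))
    # Edge activations phi_ij(x_i): shape (n_in, n_out)
    edge_out = np.einsum("ik,ijk->ij", basis, coeffs)
    # Unlike an MLP (weighted sum, then one shared nonlinearity per node),
    # each output node just sums the already-nonlinear edge outputs.
    return edge_out.sum(axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(3)
centers = np.linspace(-2.0, 2.0, 5)
coeffs = rng.standard_normal((3, 2, 5)) * 0.1
y = kan_layer(x, coeffs, centers)
print(y.shape)  # (2,)
```

The parameter count also hints at the quoted slowdown: an MLP edge carries one scalar weight, while each KAN-style edge carries K coefficients evaluated through a basis expansion, so matching parameter counts means more compute per edge.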