View a PDF of the paper titled Independence Is Not an Issue in Neurosymbolic AI, by Håkan Karlsson Faronius and 1 other authors
Abstract: A popular approach to neurosymbolic AI is to take the output of the last layer of a neural network, e.g. a softmax activation, and pass it through a sparse computation graph encoding the logical constraints one wishes to enforce. This induces a probability distribution over a set of random variables that, in many commonly used neurosymbolic AI models, are conditionally independent of each other. Such conditionally independent random variables have been deemed harmful, as their presence has been observed to co-occur with a phenomenon dubbed deterministic bias, in which a system learns to deterministically prefer one valid solution from the solution space over the others. We provide evidence contesting this conclusion and show that deterministic bias is an artifact of improperly applying neurosymbolic AI.
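The setup the abstract describes can be illustrated with a minimal sketch: each neural output is a softmax distribution over a discrete variable, the variables are treated as independent, and the probability that a logical constraint holds is the total mass of the satisfying joint assignments (weighted model counting by brute-force enumeration). The `constraint_probability` function and the "variables must differ" constraint below are hypothetical illustrations, not the paper's model.

```python
import itertools
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def constraint_probability(dists, constraint):
    """Probability that `constraint` holds, summing over all joint
    assignments.  The joint factorizes into a product of marginals
    because the variables are assumed (conditionally) independent."""
    total = 0.0
    for assignment in itertools.product(*[range(len(d)) for d in dists]):
        p = math.prod(d[v] for d, v in zip(dists, assignment))
        if constraint(assignment):
            total += p
    return total

# Two ternary variables; a toy constraint forcing them to take different values.
dists = [softmax([1.0, 0.0, -1.0]), softmax([0.0, 0.0, 0.0])]
p_sat = constraint_probability(dists, lambda a: a[0] != a[1])
```

Enumeration is exponential in the number of variables; practical systems compile the constraint into a sparse computation graph (e.g. an arithmetic circuit) so the same quantity is computed efficiently.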
Submission history
From: Håkan Karlsson Faronius
[v1]
Thu, 10 Apr 2025 15:28:36 UTC (1,030 KB)
[v2]
Wed, 16 Apr 2025 10:29:19 UTC (1,030 KB)