Decreasing Dependence on Labeled Data in Deep Neural Networks and Its Effect on Bias-Vulnerable and Other AI Applications

Decreasing deep neural networks' dependence on labeled data is vital for reaching a more efficient state of operation: it enables a more flexible response to changes in the operational environment and better problem-solving capabilities overall.

In their recently published paper “Deep Learning for AI,” Yoshua Bengio, Geoffrey Hinton, and Yann LeCun highlight neural network architectures such as Transformers and Recurrent Independent Mechanisms as first steps toward this more efficient state of operation. The authors also argue that new deep learning architectures that incorporate inductive biases can be developed by training neural networks to “discover causal dependencies or causal variables,” in ways similar to how young children perform these tasks.
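The paper itself contains no code, but the idea of learning from unlabeled data can be made concrete. Below is a minimal sketch, assuming PyTorch, of masked-token self-supervised pretraining with a small Transformer encoder, one common way such architectures reduce the need for human-provided labels. All names, sizes, and the toy data are illustrative, not drawn from the paper.

```python
# Minimal sketch: a small Transformer encoder learns to predict masked tokens
# from unlabeled sequences, so the training signal comes from the data itself.
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL, MASK_ID = 1000, 64, 0  # illustrative sizes / mask token id

class MaskedTokenModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = MaskedTokenModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Unlabeled" data: raw token sequences with no human annotation.
tokens = torch.randint(1, VOCAB_SIZE, (8, 16))   # batch of raw sequences
mask = torch.rand(tokens.shape) < 0.15           # hide ~15% of positions
corrupted = tokens.masked_fill(mask, MASK_ID)

logits = model(corrupted)                        # predict every position
loss = loss_fn(logits[mask], tokens[mask])       # score only the masked ones
loss.backward()
optimizer.step()
print(f"masked-prediction loss: {loss.item():.3f}")
```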

From a legal viewpoint, if decreasing dependence on labeled data helps reduce (or even eliminate) bias in bias-vulnerable AI applications, it is certainly a welcome development. Contractually requiring that developers use Transformer architectures, for instance, can be a useful way to promote and strengthen the adoption of such applications. And this is also desirable in AI applications where bias vulnerability is not a concern, such as connected and autonomous vehicles and the building of a transportation-centric ontology, as discussed in Fractal Disambiguation for AI and other posts.

***Postscript***

December 11, 2021: We should consider formalizing the use of dependence-decreasing neural architectures as a best practice, which positions them as a candidate to become a legal requirement. From that point, such an architecture can be enforced through contract (for example, as a warranty). It can also function as metadata for XAI, enabling the XAI to report on its presence or absence in an application.
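To illustrate the metadata idea, here is a hypothetical sketch of an architecture-metadata record that an XAI or audit tool could read to report the presence or absence of a dependence-decreasing architecture. It is not an existing standard or API; every field name, value, and clause reference is invented for this example.

```python
# Hypothetical architecture metadata attached to a deployed model, readable by
# an XAI or audit tool; all field names and values are illustrative.
from dataclasses import dataclass, asdict
import json

@dataclass
class ArchitectureMetadata:
    model_id: str
    architecture: str             # e.g. "transformer", "recurrent-independent-mechanisms"
    labeled_data_dependence: str  # how heavily the model relies on labeled data
    warranty_clause: str          # contract reference asserting the above

DEPENDENCE_DECREASING = {"transformer", "recurrent-independent-mechanisms"}

def audit(meta: ArchitectureMetadata) -> str:
    """Report whether a dependence-decreasing architecture is present or absent."""
    status = "present" if meta.architecture in DEPENDENCE_DECREASING else "absent"
    return f"{meta.model_id}: dependence-decreasing architecture {status}"

meta = ArchitectureMetadata(
    model_id="vehicle-perception-v2",
    architecture="transformer",
    labeled_data_dependence="self-supervised pretraining + small labeled fine-tune",
    warranty_clause="Section 4.2 (architecture warranty)",
)
print(audit(meta))
print(json.dumps(asdict(meta), indent=2))
```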