AiCE & Environmental Learning

Leaping beyond mere spam filter status, Gmail's recently announced Priority Inbox (PI) makes a bigger and equally important promise: salvation from the deluge of low-priority email. How? It will learn. Under the hood is an AI engine that learns to separate low-priority emails from those deserving attention: it monitors which emails the user typically responds to, builds rules from that behavioral stream, and sorts inbound traffic accordingly.
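To make the idea concrete, here is a toy sketch of reply-driven priority learning. This is an illustration of the general technique, not Gmail's actual algorithm; the class and method names are my own.

```python
from collections import defaultdict

class PriorityScorer:
    """Toy sketch of reply-driven priority learning.
    Illustrative only -- not Gmail's actual algorithm."""

    def __init__(self):
        self.seen = defaultdict(int)     # emails received, per sender
        self.replied = defaultdict(int)  # emails replied to, per sender

    def observe(self, sender, user_replied):
        # Every handled email doubles as a training signal.
        self.seen[sender] += 1
        if user_replied:
            self.replied[sender] += 1

    def score(self, sender):
        # Laplace-smoothed reply rate as a priority estimate.
        return (self.replied[sender] + 1) / (self.seen[sender] + 2)

scorer = PriorityScorer()
scorer.observe("boss@example.com", user_replied=True)
scorer.observe("newsletter@example.com", user_replied=False)
assert scorer.score("boss@example.com") > scorer.score("newsletter@example.com")
```

The point is the feedback loop: the user's ordinary behavior, not an explicit configuration step, is what shapes the sorting rules.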

PI joins its Google AI siblings (Prediction API and BigQuery) in the latest lineup of nascent AI applications embedding into mainstream use, giving rise to an interesting, encouraging pattern. In my AiCE presentation at Stanford Law School this past January, I argued that top tech companies (specifically Google and Apple) are the natural candidates for AiCE first-implementer status. Months later, these very same companies are taking important steps in AI integration; the stage for AiCE implementation is (slowly) crystallizing.

Learning a user's preferences is not unique to PI. It has long been a core promise of AI applications, and it is a critical feature: it strengthens an application's resilience in the face of unprogrammed contingencies, without which the application would be useless or would crash.

Befitting its AI DNA, AiCE is capable of learning, with the what and how depending on the designer's or end-user's chosen configuration. For instance, the AiCE Veiled Identity Agent (AVIA), presented at the AAAI Spring 2010 Symposium at Stanford, will learn its user's privacy preferences and will be able to accommodate a wide range of personally tailored interactions that would otherwise be impossible.
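A minimal sketch of what preference learning might look like in an AVIA-style agent: the agent records the user's allow/deny decisions and only acts autonomously once those decisions have been consistent. All names and thresholds here are my own assumptions, not part of the AVIA design.

```python
class VeiledIdentityAgent:
    """Minimal sketch of privacy-preference learning for an
    AVIA-style agent (illustrative assumptions throughout)."""

    def __init__(self, confidence_threshold=3):
        # (requester, data_kind) -> list of past user decisions (bools)
        self.history = {}
        self.threshold = confidence_threshold

    def record_decision(self, requester, data_kind, allowed):
        self.history.setdefault((requester, data_kind), []).append(allowed)

    def decide(self, requester, data_kind):
        past = self.history.get((requester, data_kind), [])
        # Act autonomously only after consistent user behavior;
        # otherwise defer back to the user.
        if len(past) >= self.threshold and len(set(past)) == 1:
            return "allow" if past[0] else "deny"
        return "ask-user"
```

After three consistent denials of, say, a location request from an ad network, the agent starts denying on its own; novel requests still fall back to the user.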

But designers and end-users are not the exclusive learning sources; i.e., they are not necessarily the sole input mechanism for the AI instance. In some AiCE iterations, the operational environment could itself function as an input mechanism.

One illustration of environment-based learning comes from the AiCE browsewrap protection configuration. Given AiCE's autonomous behavior and its projected capabilities vis-à-vis passive resistance mechanisms (PRM) and counter-offensive action (COA), including dynamic escalation, its learning experience is fueled and shaped by the number of interactions it has. So although the AiCE on duty starts with a core knowledge element planted by the designer and/or end-user, the environment-based learning opportunities are numerous and operationally valuable.
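Dynamic escalation driven by the environment can be sketched as follows. The tiers and thresholds below are illustrative assumptions about how PRM might escalate to COA, not a specification.

```python
class BrowsewrapGuard:
    """Sketch of environment-driven escalation (PRM -> COA) for an
    AiCE browsewrap configuration. Tiers and thresholds are
    illustrative assumptions, not a specification."""

    def __init__(self, prm_threshold=3, coa_threshold=10):
        self.violations = 0
        self.prm_threshold = prm_threshold
        self.coa_threshold = coa_threshold

    def observe_violation(self):
        # Each hostile interaction is both an event to handle and a
        # learning input that may push the agent up the escalation ladder.
        self.violations += 1
        return self.current_level()

    def current_level(self):
        if self.violations >= self.coa_threshold:
            return "counter-offensive"
        if self.violations >= self.prm_threshold:
            return "passive-resistance"
        return "monitor"
```

The essential point is that no designer or end-user intervenes between tiers: the operational environment itself supplies the input that moves the agent from monitoring to passive resistance to counter-offensive action.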

Another useful illustration comes from UNTAME, which is not an AiCE but is relevant nonetheless. Those of you who read the April 2010 blog post on the subject (and those just catching up to it now) will likely appreciate the significance of UNTAME's "hive" habitat, which offers a cooperative and regenerative framework. When it comes to learning, these two qualities are significant: every UNTAME instance can learn from the experiences another gathers while protecting the computer network; knowledge is passed from one UNTAME to another, and regeneration occurs whenever an instance is lost, preserving the compounded learning experience garnered so far.
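The cooperative and regenerative qualities can be sketched together: knowledge lives in the hive rather than in any single instance, so losing an instance loses nothing. The class and method names here are my own illustrations of the idea, not UNTAME's actual design.

```python
class Hive:
    """Sketch of a cooperative, regenerative habitat in the spirit of
    UNTAME's hive (names and structure are illustrative assumptions)."""

    def __init__(self, size):
        self.shared_knowledge = set()  # pooled learning, owned by the hive
        self.instances = {f"untame-{i}" for i in range(size)}

    def learn(self, instance, fact):
        # Cooperation: one instance's experience immediately
        # benefits every other instance.
        assert instance in self.instances
        self.shared_knowledge.add(fact)

    def lose(self, instance):
        # Regeneration: a lost instance is replaced, and the
        # replacement inherits the hive's compounded knowledge.
        self.instances.discard(instance)
        replacement = f"untame-regen-{len(self.instances)}"
        self.instances.add(replacement)
        return replacement

hive = Hive(size=3)
hive.learn("untame-0", "signature:worm-x")
hive.lose("untame-1")
assert "signature:worm-x" in hive.shared_knowledge  # knowledge survives the loss
```

Because the knowledge set belongs to the hive, not to the instance that produced it, regeneration is cheap: a new instance starts with everything its predecessors learned.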

As PI and similar AI implementations gather steam, it will be interesting to see how open they are designed to be: whether they are capable of valuable environment-based learning or are confined to an operational cocoon, gathering input from the end-user alone.