Fractal Disambiguation for AI

To be and remain relevant, the transportation-centric ontology must be populated with credible data. Maintaining confidence that the stored data are accurate, and therefore actionable, requires computational techniques known to yield not merely a “good” degree of accuracy but the highest attainable. A key characteristic of an “accurate” datum is that it is the product of efficient algorithmic objective differentiation (AOD): the algorithm is designed to resist computational degradation, which occurs more often as the encountered data sets grow more complex. Left uncorrected, that degradation produces unacceptably ambiguous results, which breed systemic inaccuracy and, ultimately, a useless, error-ridden environment. Without AOD, confidence in the ontology is impossible.

Algorithmically representing complex, effectively infinite-detail data (e.g., terrain, traffic) is the domain of fractals. In Fractal Analogies for General Intelligence, Keith McGreggor and Ashok Goel describe how fractal representations can be used to deliver human-level results. They demonstrate the approach on the Odd One Out intelligence test and conclude by describing how their fractal technique “enables percept-to-action” in a simulated environment. Leveraging fractal representations in the manner McGreggor and Goel describe could be, arguably, the best currently available technique for delivering AI with AOD features. That said, it would be worth measuring whether other techniques achieve higher AOD accuracy, such as, for example, quadratic unconstrained binary optimization (QUBO). (For more on QUBO, see my discussion in Quantum Computing and Computational Law Applications.)
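McGreggor and Goel's actual method encodes images as iterated-function-system (fractal) transforms and compares them by feature overlap; as a loose, simplified stand-in for that idea, the sketch below uses a single fractal feature, the box-counting dimension, to flag the “odd” point set among several candidates. All function names and the toy data here are illustrative assumptions, not their implementation.

```python
import numpy as np

def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal (box-counting) dimension of 2-D points in [0,1)^2.

    Counts occupied grid boxes at several resolutions and fits
    log(count) against log(resolution); the slope approximates the dimension.
    """
    counts = []
    for n in sizes:
        boxes = {tuple(np.floor(p * n).astype(int)) for p in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

def odd_one_out(point_sets):
    """Return the index of the set whose fractal feature deviates most."""
    dims = np.array([box_counting_dimension(ps) for ps in point_sets])
    deviation = np.abs(dims - np.median(dims))
    return int(np.argmax(deviation))

rng = np.random.default_rng(0)
line1 = np.column_stack([rng.random(2000), np.full(2000, 0.5)])  # ~1-D set
line2 = np.column_stack([np.full(2000, 0.3), rng.random(2000)])  # ~1-D set
plane = rng.random((2000, 2))                                    # ~2-D set
print(odd_one_out([line1, line2, plane]))  # → 2 (the planar cloud stands out)
```

The line-like sets have dimension near 1 while the uniform cloud approaches 2, so a single fractal feature already disambiguates them; the full McGreggor–Goel representation carries far richer transform-level features than this one number.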

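For readers unfamiliar with QUBO, the formulation is compact: minimize x^T Q x over binary vectors x, where Q encodes rewards on the diagonal and interaction penalties off it. The toy solver below is an illustrative brute-force sketch (quantum annealers target the same objective at scales where enumeration is infeasible); the matrix values are made up for demonstration.

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Minimize x^T Q x over binary vectors x by exhaustive search.

    Practical only for small n, but it makes the objective concrete.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: diagonal terms reward selecting each variable;
# the off-diagonal +2 penalizes selecting x0 and x1 together.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
x, e = solve_qubo_brute_force(Q)
print(x, e)  # → [0 1 1] -2.0 (picking x1 and x2 avoids the penalty)
```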

Update 11/26/2017: Training time is among the most significant challenges in deep-learning model implementations. Because it is a significant resource drain, and thus cost-prohibitive, deep learning at scale is currently limited largely to companies like Facebook, Google, and Amazon. One method of alleviating this constraint involves breaking the neural network into discrete “capsules,” yielding a “capsule network.” The desirability of this neural-net format is explained in “Dynamic Routing Between Capsules,” in which Google researchers Sara Sabour, Nicholas Frosst, and Geoffrey Hinton describe a multi-tiered neural structure of capsules that uses an iterative “routing-by-agreement” mechanism; they report state-of-the-art performance on MNIST and considerably better results than a convolutional net at recognizing highly overlapping digits. Broadening the availability of deep-learning implementation and combining it with fractal representations could concomitantly benefit computational-law-capable autonomous and semi-autonomous vehicles and aircraft, which in turn stands to help populate the transportation-centric ontology with high-integrity data.
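The routing-by-agreement loop itself is short: coupling coefficients are a softmax over routing logits, lower-level predictions are combined into candidate outputs, a “squash” non-linearity bounds each output's length, and the logits are updated by how well each prediction agrees with each output. The sketch below is a minimal NumPy rendering of that loop, assuming random prediction vectors in place of the paper's learned convolutional front end; array shapes and names are my own.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    """Shrink short vectors toward zero; keep long vectors just under unit length."""
    norm_sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + eps)

def routing_by_agreement(u_hat, iterations=3):
    """Iterative routing between one capsule layer and the next.

    u_hat: (num_in, num_out, dim) prediction vectors from lower capsules.
    Returns (num_out, dim) output capsule vectors.
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                           # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax couplings
        s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted sum
        v = squash(s)                                         # output capsules
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)          # agreement update
    return v

rng = np.random.default_rng(1)
u_hat = rng.normal(size=(8, 3, 4))  # 8 lower capsules, 3 upper, 4-D poses
v = routing_by_agreement(u_hat)
print(v.shape)  # → (3, 4); each output vector's length stays below 1
```

The length of each output vector then serves as the capsule's activation probability, which is what lets agreement, rather than pooling, decide where information flows.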