Fractal Disambiguation for AI

To be and remain relevant, the transportation-centric ontology must be populated with credible data. Generating and maintaining confidence that the stored data are accurate (which is what makes them actionable) requires computational techniques known to yield not merely a “good” degree of accuracy but the highest attainable. A key characteristic of an “accurate” datum is that it is derived from efficient algorithmic objective differentiation (AOD), meaning the algorithm is designed to be invulnerable to computational degradation, which occurs more frequently as encountered data sets grow more complex. Left uncorrected, that degradation delivers unacceptably ambiguous results, which breed systemic inaccuracy in what becomes a useless, error-infused environment. Without AOD, confidence in the ontology is impossible.

Algorithmically representing complex, infinite-detail data (e.g., terrain, traffic) is the domain of fractals. In Fractal Analogies for General Intelligence, professors Keith McGreggor and Ashok Goel describe how fractal representations can be used to deliver human-level results. They demonstrate the approach on the Odd One Out intelligence test and conclude by describing how their fractal technique “enables percept-to-action” in a simulated environment. Leveraging fractal representations in the manner McGreggor and Goel describe could be the technique (arguably the best, at least for the moment) by which to deliver AI with AOD features. That said, it would be interesting to measure whether AOD accuracy is higher with other techniques, such as quadratic unconstrained binary optimization (QUBO). (For more on QUBO, see my discussion in Quantum Computing and Computational Law Applications.)
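To make the idea concrete, here is a minimal sketch of the fractal style of representation: each image is re-described as a set of block-level “fractal codes” (the best-matching source block plus a transform), and candidates in an Odd One Out puzzle are scored by how much their codes overlap. This is a toy illustration in Python, not McGreggor and Goel’s implementation; the block size, transform set, and scoring heuristic are my own assumptions.

```python
# Toy sketch of a fractal-style representation (assumptions throughout).
import numpy as np

BLOCK = 4  # block size in pixels (assumption)

def transforms(block):
    """Yield a small dihedral set of transforms of a block."""
    for k in range(4):
        r = np.rot90(block, k)
        yield ("rot%d" % k, r)
        yield ("rot%d_flip" % k, np.fliplr(r))

def fractal_codes(src, dst):
    """Re-describe dst in terms of src: for every dst block, record the
    source block position, transform label, and quantized brightness shift
    that best reconstruct it. The resulting set of tuples is the 'fractal
    representation' of the src -> dst relationship."""
    h, w = dst.shape
    src_blocks = [(i, j, src[i:i + BLOCK, j:j + BLOCK])
                  for i in range(0, h, BLOCK) for j in range(0, w, BLOCK)]
    codes = set()
    for i in range(0, h, BLOCK):
        for j in range(0, w, BLOCK):
            d = dst[i:i + BLOCK, j:j + BLOCK]
            best = None
            for si, sj, s in src_blocks:
                for label, t in transforms(s):
                    shift = int(round((d.mean() - t.mean()) / 16.0))  # coarse quantization
                    err = np.abs((t + shift * 16.0) - d).sum()
                    if best is None or err < best[0]:
                        best = (err, (si, sj, label, shift))
            codes.add(best[1])
    return codes

def similarity(a, b):
    """Tversky-style ratio: features shared over all features observed."""
    shared = len(a & b)
    total = shared + len(a - b) + len(b - a)
    return shared / total if total else 0.0

def odd_one_out(images):
    """Toy heuristic: represent each image by the pooled fractal codes that
    re-describe it in terms of every other image, then flag the image whose
    representation overlaps least with the others'."""
    n = len(images)
    reps = [set().union(*(fractal_codes(images[j], images[i])
                          for j in range(n) if j != i)) for i in range(n)]
    scores = [np.mean([similarity(reps[i], reps[j]) for j in range(n) if j != i])
              for i in range(n)]
    return int(np.argmin(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, (16, 16)).astype(float)
    images = [base + rng.normal(0, 5, base.shape) for _ in range(3)]
    images.append(rng.integers(0, 256, (16, 16)).astype(float))  # the odd candidate
    print("odd one out:", odd_one_out(images))
```

The key property is that similarity is judged over the transformations relating images rather than over raw pixels, which is what makes the representation a candidate for taming ambiguous, detail-rich inputs.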

***Postscript***

Update 5/6/2021: The U.S. Department of Energy’s Oak Ridge National Laboratory’s Multinode Evolutionary Neural Networks for Deep Learning (MENNDL) was recently licensed to General Motors and will be used in the development of its driver assistance systems. MENNDL powers rapid neural network evaluation, enabling the selection of an optimal neural network design for a given application. Combining that capability with fractal representation seems like an effective pairing, one that could significantly enhance the credibility of the data in the ontology.
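Public descriptions of MENNDL characterize it as an evolutionary search over network designs, with candidate architectures evaluated in parallel across many nodes. The sketch below shows only that general pattern, a plain evolutionary loop with a placeholder fitness function standing in for “train briefly and score on validation data”; it is not ORNL’s code, and every parameter in it is an illustrative assumption.

```python
# Generic evolutionary architecture-search loop (illustrative, not MENNDL).
import random

random.seed(0)

def random_genome():
    """A genome is a list of hidden-layer widths (1-4 layers, 8-256 units)."""
    return [random.choice([8, 16, 32, 64, 128, 256])
            for _ in range(random.randint(1, 4))]

def mutate(genome):
    """Resize, add, or drop one layer."""
    g = list(genome)
    op = random.choice(["resize", "add", "drop"])
    if op == "resize" or (op == "drop" and len(g) == 1):
        i = random.randrange(len(g))
        g[i] = random.choice([8, 16, 32, 64, 128, 256])
    elif op == "add" and len(g) < 4:
        g.insert(random.randrange(len(g) + 1), random.choice([8, 16, 32, 64]))
    elif op == "drop" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    return g

def fitness(genome):
    """Placeholder for 'train the candidate network, return validation score'.
    In a real system each evaluation would run on its own node, which is
    where the multinode parallelism pays off."""
    target = [64, 32]                      # pretend this shape suits the task
    size_penalty = sum(genome) / 1000.0    # prefer smaller networks
    match = -sum(abs(a - b) for a, b in zip(genome, target))
    match -= 32 * abs(len(genome) - len(target))
    return match - size_penalty

def evolve(generations=30, population=20, keep=5):
    pop = [random_genome() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:keep]                       # elitist selection
        children = [mutate(random.choice(parents))
                    for _ in range(population - keep)]
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    print("best architecture found:", evolve())
```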

Update 10/3/2020: When it comes to ontologies, there is a strong, cyclical relationship between data credibility and effective dataset training. Here, “effective training” turns on whether the machine learning application can employ the dataset to yield useful actions, and “useful” is mission-specific: an action is useful to the extent it supports, propagates, and bolsters the dictated mission. It follows that legal and technical regimes that enforce and promote credible data in an ontology can also be expected to serve effective dataset training.

Update 5/15/2020: Miniaturizing an optical neural network and supercharging it so it can analyze data in a “few nanoseconds” should also be useful for speeding up fractal-driven percept-to-action (discussed above). For more on this, check out the MIT Technology Review story on the research conducted at the Institute of Photonics in Vienna, Austria. The neural network the researchers built comprises light-sensing diodes fabricated on a tungsten diselenide sheet a “few atoms thick.” An increase in the reliability of percept-to-action can decrease errors and, consequently, damage and liability.

Update 7/23/2019: ImageNet-A is a hand-curated collection of images deemed capable of confusing AI; i.e., they feature a degree of complexity that triggers computational degradation. Curiously, the research paper, Natural Adversarial Examples, does not discuss employing fractal representation, and it would be interesting to see to what degree the degradation could be countered with that technique.
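One way to run that measurement would be to score a stock pretrained classifier on a local copy of ImageNet-A and compare the result against the same model’s accuracy on standard ImageNet validation images. The sketch below is a baseline harness only, not the paper’s evaluation code; the local dataset path and the mapping from ImageNet-A’s WordNet-ID folder names to the model’s 1,000 class indices are assumptions the reader must supply.

```python
# Baseline ImageNet-A evaluation sketch (path and class mapping are assumptions).
import torch
from torchvision import datasets, transforms, models

imagenet_a_root = "data/imagenet-a"   # hypothetical local path
wnid_to_idx = {}                      # e.g. {"n01498041": 5, ...} -- supply your own mapping

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(imagenet_a_root, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, folder_labels in loader:
        preds = model(images).argmax(dim=1)
        # Translate each folder label back to its WordNet ID, then to the
        # model's 1000-class index, and compare.
        targets = torch.tensor([wnid_to_idx[dataset.classes[int(l)]]
                                for l in folder_labels])
        correct += (preds == targets).sum().item()
        total += targets.numel()

print(f"ImageNet-A top-1 accuracy: {correct / total:.1%}")
# Running the same loop on standard ImageNet validation images quantifies
# how much the adversarial set degrades the model.
```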

Update 1/23/2018: A key challenge in producing credible data for the transportation-centric ontology is consistently reducing as much ambiguity as possible. McGreggor and Goel’s fractal representations are one method by which to accomplish this, by helping the autonomous agent manage the ambiguity it encounters. But this is not enough. Ambiguity also erodes with time: the more time passes, the larger the data sets an autonomous agent is exposed to and can learn from. That cumulative learning increases the agent’s operational efficiency because it shrinks the pool of potential future ambiguity. The problem is that this learning can take a long time, which makes it a resource-intensive process and narrows the field of possible players to companies like Facebook, Google and Amazon. Solving this requires accelerating the training-time variable by leveraging efficiency-driving techniques such as capsule networks. Together, these two techniques, and others of similar ilk, can drive the effort to eradicate ambiguity and help build highly credible data ontologies.

Update 11/26/2017: The training-time variable is among the most significant challenges in deep learning model implementations. Because training is a major resource drain and cost-prohibitive, deep learning implementation is currently limited to companies like Facebook, Google and Amazon. One method for alleviating this constraint involves breaking the neural network down into discrete “capsules,” yielding a “capsule network.” The desirability of this neural net format is explained in the “Dynamic Routing Between Capsules” article, in which Google researchers Sara Sabour, Nicholas Frosst and Geoffrey Hinton describe a multi-tiered neural structure of capsules that uses an iterative “routing-by-agreement” mechanism. The researchers claim it can achieve state-of-the-art performance on MNIST more efficiently than a convolutional net can. Broadening the availability of deep learning model implementation and combining it with fractal representations can concomitantly benefit computational-law-capable autonomous and semi-autonomous vehicles and aircraft, which in turn stands to help populate the transportation-centric ontology with high-integrity data.
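For readers who want to see the mechanism, the routing-by-agreement procedure the paper describes is short: lower-level capsules emit prediction vectors for higher-level capsules, coupling coefficients are computed by a softmax over routing logits, and the logits are nudged upward wherever a prediction agrees with the higher capsule’s output. Below is a minimal NumPy sketch of that loop over precomputed prediction vectors; the toy dimensions are assumptions, and the surrounding CapsNet layers and margin loss are omitted.

```python
# Minimal sketch of routing-by-agreement (Sabour, Frosst, Hinton 2017).
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Non-linearity from the paper: shrinks short vectors toward zero and
    long vectors toward unit length, preserving direction."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, iterations=3):
    """u_hat: predictions from lower capsules to higher capsules,
    shape (num_lower, num_higher, dim). Returns higher-capsule outputs,
    shape (num_higher, dim)."""
    num_lower, num_higher, _ = u_hat.shape
    b = np.zeros((num_lower, num_higher))                       # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)    # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                   # weighted sum of predictions
        v = squash(s)                                           # higher-capsule outputs
        b += np.einsum("ijd,jd->ij", u_hat, v)                  # agreement updates the logits
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 6 lower-level capsules each predict 8-dim poses for 3 higher-level capsules.
    u_hat = rng.normal(size=(6, 3, 8))
    v = routing_by_agreement(u_hat)
    print("output capsule lengths:", np.round(np.linalg.norm(v, axis=-1), 3))
```

In the paper, the length of each output vector encodes the probability that the entity the capsule represents is present, which is why the squash non-linearity keeps lengths just below one.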