Fractal Disambiguation for AI

To be and remain relevant, the transportation-centric ontology must be populated with credible data. Generating and maintaining confidence that the stored data are accurate, and therefore actionable, requires computational techniques known to yield not merely a "good" degree of accuracy but the highest attainable. A key characteristic of an "accurate" datum is that it is the product of efficient algorithmic objective differentiation (AOD): the algorithm is designed to resist computational degradation, which occurs more frequently as the data sets it encounters grow more complex. Left uncorrected, that degradation produces unacceptably ambiguous results, which breed systemic inaccuracy and, ultimately, a useless, error-infused environment. Without AOD, confidence in the ontology is impossible.

Algorithmically representing complex, effectively infinite-detail data (e.g., terrain, traffic) is the domain of fractals. In Fractal Analogies for General Intelligence, professors Keith McGreggor and Ashok Goel describe how fractal representations can be used to deliver human-level results. They demonstrate the approach on the Odd One Out intelligence test and conclude by describing how their fractal technique "enables percept-to-action" in a simulated environment. Leveraging fractal representations in the manner McGreggor and Goel describe is arguably the best available technique, at least for the moment, for delivering AI with AOD features. That said, it would be worth measuring whether other techniques, such as quadratic unconstrained binary optimization (QUBO), yield higher AOD accuracy; a sketch of each idea follows below. (For more on QUBO, see my discussion in Quantum Computing and Computational Law Applications.)
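
To make the Odd One Out idea concrete, the sketch below follows the spirit of McGreggor and Goel's approach without reproducing their actual fractal encoder: each candidate image is reduced to a set of features (here, made-up block-transformation labels stand in for a true fractal representation), pairwise similarity is scored with a Tversky-style ratio of shared to total features, and the item least similar to its peers is flagged as the odd one out. The function names, feature labels, and data are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations

def tversky_similarity(a: set, b: set, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Ratio-model similarity between two feature sets (Tversky's contrast model)."""
    common = len(a & b)
    only_a = len(a - b)
    only_b = len(b - a)
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0

def odd_one_out(items: dict[str, set]) -> str:
    """Return the item whose summed similarity to all other items is lowest."""
    scores = {name: 0.0 for name in items}
    for (name1, feats1), (name2, feats2) in combinations(items.items(), 2):
        s = tversky_similarity(feats1, feats2)
        scores[name1] += s
        scores[name2] += s
    return min(scores, key=scores.get)

if __name__ == "__main__":
    # Hypothetical "fractal" feature sets: in the real technique these would be
    # codes for the contractive transformations that reconstruct each image.
    candidates = {
        "A": {"rotate90", "shrink2x", "quadrant:NE"},
        "B": {"rotate90", "shrink2x", "quadrant:NW"},
        "C": {"rotate90", "shrink2x", "quadrant:SE"},
        "D": {"reflectX", "shrink4x", "quadrant:SW"},  # structurally different item
    }
    print(odd_one_out(candidates))  # -> "D"
```

The design point is that the comparison operates on representations of transformation, not on raw pixels, which is what lets the same machinery scale across levels of detail.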
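
For comparison, QUBO casts a decision as minimizing a quadratic function of binary variables. The brute-force sketch below, with an arbitrary three-variable Q matrix chosen purely for illustration, shows only the problem form that quantum annealers and classical heuristics both accept; it is not tied to any particular solver or to the ontology data itself.

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q: np.ndarray) -> tuple[np.ndarray, float]:
    """Minimize x^T Q x over binary vectors x by exhaustive enumeration.

    Practical only for small n; realistic QUBO instances are handed to
    annealers or heuristic solvers instead.
    """
    n = Q.shape[0]
    best_x, best_energy = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

if __name__ == "__main__":
    # Illustrative upper-triangular Q: diagonal entries are linear biases,
    # off-diagonal entries are pairwise couplings (values chosen arbitrarily).
    Q = np.array([
        [-1.0,  2.0,  0.0],
        [ 0.0, -1.0,  2.0],
        [ 0.0,  0.0, -1.0],
    ])
    x, energy = solve_qubo_brute_force(Q)
    print(x, energy)  # -> [1 0 1] with energy -2.0
```

Measuring whether such a formulation disambiguates complex data more accurately than a fractal representation would be the empirical question raised above.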