The Hounds and the Content Duck
As I was pondering passive resistance mechanisms (PRM) in online content protection, Mahatma Gandhi's peaceful opposition tactics popped up (like one of those pop-up ads, which amusingly now seem so retro). His strategy against the British worked because it was exercised against the British. Against the Taliban, well… a different result would have been recorded. Similarly, content owners seeking to curb or exclude bots from their sites fare well with PRM, such as robots.txt, when the encounters are with friendly bots. Where the bots are like marauding hounds, they always make off unabated with the content duck.
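The voluntary nature of robots.txt is easy to see in code. The sketch below uses Python's standard-library `urllib.robotparser`; the URLs, user-agent name, and the "marauding" fetcher are hypothetical illustrations, not any real bot.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt asking all bots to stay out of /private/.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def friendly_fetch(url: str, agent: str = "FriendlyBot") -> bool:
    # A well-behaved crawler consults robots.txt before fetching
    # and honors the answer.
    return rp.can_fetch(agent, url)

def marauding_fetch(url: str) -> bool:
    # A hostile bot simply never asks. Nothing in robots.txt
    # technically prevents it from grabbing the page anyway.
    return True

print(friendly_fetch("https://example.com/private/duck.html"))   # False
print(marauding_fetch("https://example.com/private/duck.html"))  # True
```

The asymmetry is the whole point: robots.txt restrains only the bots that choose to read it.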
Another tool that can be lobbed into the PRM bucket is the browsewrap agreement. In terms of legal attention it steals the show as the means du jour by which content owners seek to regulate use of their sites. Yet it too is far from a slam-dunk mechanism. Human users are overwhelmingly ignorant of its existence; those who do know about it don't bother reading it (more on that in a second). Irresponsible bot designers willfully ignore it, and even friendly bots are unable to interact with it effectively, if at all. Yes, violation of the terms (in theory) gives rise to a breach-of-contract claim and maybe saves the day when copyright infringement falls short. But there's too much legal turbulence around it. Sparks fly and judicial inefficiency reigns when this wholesale disregard meets the legal principle of being bound to the terms of an agreement one has neither read nor understood.
While AiCE can certainly play in the PRM sandbox, I think it will be much more interesting to see it in the context of counter-offensive action (COA) taken in unfriendly-bot encounters. One instance of COA could be a configuration similar to UNTAME, where offending bots are destroyed by AiCE. And to those who might object that destroying bots (property?) is unacceptable and will give rise to further litigation, I counter with two points. First, it is a fact that we summarily delete viruses from our computers all the time; to my knowledge, their creators have never cried foul or successfully sought legal recourse to stop it. Second, echoing the reasoning of the Parker v. Google case, failure to design a bot so it can effectively interact with terms of use and/or AiCE amounts to an implied license to destroy it.
Of course, destruction of offending bots is not the only course of action available under COA. Akin to the police halt-or-I'll-shoot command, AiCE could gradually escalate its COA response, with destruction being the equivalent of shoot-to-kill. But while a gradual response (such as warning the bot and giving it the option to leave) may be appealing from a reasonable-response standard, its effectiveness is almost completely dependent on whether the offending bot is capable of properly heeding instruction. This illuminates the need to standardize browsewraps, bot designs, and site designs around the incorporation of one or more AiCE, so that we have a more effective counterbalance to PRM.
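The escalation ladder described above can be sketched in a few lines. Everything here is hypothetical: no AiCE specification exists, and the response names, thresholds, and the `bot_heeds_warnings` flag are illustrative assumptions only.

```python
from enum import Enum

class Response(Enum):
    WARN = 1     # "halt" - give the bot the option to leave
    BLOCK = 2    # cut off access
    DESTROY = 3  # the shoot-to-kill endpoint

# Hypothetical escalation ladder, ordered from mildest to harshest.
ESCALATION = [Response.WARN, Response.BLOCK, Response.DESTROY]

def escalate(violations: int, bot_heeds_warnings: bool) -> Response:
    """Pick a response level based on repeat offenses.

    A bot that can parse and heed the first warning never climbs
    the ladder; one that cannot escalates straight through - which
    is exactly why the scheme depends on bots understanding it.
    """
    if bot_heeds_warnings and violations <= 1:
        return Response.WARN
    level = min(violations - 1, len(ESCALATION) - 1)
    return ESCALATION[level]

print(escalate(1, bot_heeds_warnings=True))   # Response.WARN
print(escalate(3, bot_heeds_warnings=False))  # Response.DESTROY
```

Note how the gradual response only restrains the outcome for a bot that understands the warning; for one that cannot, the ladder collapses into shoot-to-kill, mirroring the standardization argument above.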