All of us participate in massive, complex processes that dissect and analyze every instance of our shopping, spending, friending, gaming, movie watching, phobias, dating, dieting and countless other activities. All of this will only grow as the range of ubiquitous devices that double as data collection points, such as wearables, expands.
This endless supply of valuable data (currently “only” at the petabyte level) keeps corporate analytic algorithms busy, churning away to reveal more and more about us: facts that contain more than we would likely ever want others to know, strangers, friends and family alike. This data is not merely a reflection of who we are. It’s our data. We are its creators and owners. It is very personal and very valuable (companies are spending billions collecting and leveraging it). And yet we have virtually no clue as to when, how, where and how frequently the companies we hand it over to are going to make use of it. We may believe they will do only “good” things with our personal data, but at the end of the day we really just don’t know.
It has become a tired cliché to declare that our personal data is important to us; it’s a vacuous statement, one that fuels shallow legislation and meaningless, impractical rhetoric. If the level of importance we ascribe to our personal data can be gleaned from the amount of care we actually show for it, the reality is that we don’t care enough. There are, of course, nuances to this observation, and the various explanations have already been exhaustively examined and documented by others (I have written about some of these phenomena here). Numerous proposals for how to promote more control over personal data have been advanced. None have proven all that effective.
This post (the first in a series) introduces a new, arguably disruptive variable to the personal data control discussion. It proposes a new technological platform, one that is based on old legal principles, a platform by which we can gain needed control over our personal data. I call it “myDRM.”
As its name implies, the concept of myDRM is based on digital rights management (DRM). It takes what has been an exclusively commercial tool and morphs it into an additional dimension, a personal one. Referring to myDRM as a “platform” is specifically intended to denote that it has numerous possible iterations and implementations, all of which are poised to solve a variety of legal challenges. As I will discuss in upcoming posts, myDRM could tackle, for example, the challenges of obtaining meaningful user consent, enabling practical data portability, promoting richer user engagement, diluting surveillance, and promoting operational transparency; it could perhaps even enable (to a data-specific extent) a right-to-be-forgotten feature.
The myDRM platform is envisioned to be based on an open source model and at this introductory stage I will offer a glimpse into three features. First, in terms of scope, data that is subject to myDRM is referred to as a “data packet.” This simply means that the user has the option of selecting which data will be tagged with myDRM, and which will be released without any restriction.
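To make the "data packet" idea concrete, here is a minimal sketch in Python. Everything in it (the `DataPacket` type, the restriction labels, the `package` helper) is invented for illustration; it is not a real myDRM API, only one way the selective-tagging behavior described above could be modeled.

```python
# Illustrative sketch only: the user chooses which fields travel as
# myDRM-tagged data packets (with restrictions attached) and which are
# released without any restriction.
from dataclasses import dataclass, field

@dataclass
class DataPacket:
    payload: dict                                    # the user data itself
    restrictions: set = field(default_factory=set)   # e.g. {"no-sale"}

    @property
    def is_restricted(self) -> bool:
        return bool(self.restrictions)

def package(data: dict, tagged_fields: dict) -> list:
    """Split raw data into myDRM-tagged packets and unrestricted packets."""
    packets = []
    for key, value in data.items():
        restrictions = tagged_fields.get(key, set())
        packets.append(DataPacket({key: value}, set(restrictions)))
    return packets

packets = package(
    {"email": "a@example.com", "favorite_color": "blue"},
    tagged_fields={"email": {"no-sale", "no-cross-border-transfer"}},
)
assert packets[0].is_restricted       # email travels under myDRM
assert not packets[1].is_restricted   # favorite_color released freely
```

The point of the sketch is the user-facing choice: the same raw record yields both restricted and unrestricted packets, depending entirely on what the owner elects to tag.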
A second feature calls for AI functionality. This capability would allow myDRM, for example, to control a data packet’s disposition (preventing its sale, improper disclosure, or transfer to certain jurisdictions) and to engage in transactions, such as negotiating the unlocking of some or all restrictions for a fee (micropayments). None of these AI operations would need to be static; they could adapt in real time to changing events.
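A stripped-down sketch of that broker behavior might look like the following. The rule set, the fee schedule, and the placeholder jurisdiction code are all assumptions made up for this example; a real agent would learn and adjust these in real time rather than hard-code them.

```python
# Hypothetical policy broker for a data packet: it vetoes restricted
# dispositions and can quote a micropayment to lift a restriction.
BLOCKED_JURISDICTIONS = {"XX"}        # placeholder country codes
UNLOCK_FEES = {"no-sale": 0.25}       # illustrative micropayment, in dollars

def evaluate(request: dict, restrictions: set) -> dict:
    """Return an allow / deny / offer decision for a proposed use."""
    action = request["action"]
    if action == "transfer" and request.get("jurisdiction") in BLOCKED_JURISDICTIONS:
        return {"decision": "deny", "reason": "blocked jurisdiction"}
    if action == "sell" and "no-sale" in restrictions:
        fee = UNLOCK_FEES.get("no-sale")
        if fee is not None:
            # Negotiation path: offer to unlock the restriction for a fee.
            return {"decision": "offer", "unlock_fee": fee}
        return {"decision": "deny", "reason": "sale prohibited"}
    return {"decision": "allow"}

print(evaluate({"action": "sell"}, {"no-sale"}))
# {'decision': 'offer', 'unlock_fee': 0.25}
```

The "offer" branch is where the micropayment negotiation described above would live; everything else is a straightforward policy check.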
The third feature focuses on myDRM’s ease of use. One proposal is to enable access to it at the device level (in contrast to per-app settings). This approach is intended to help ensure consistent implementation of the DRM features across all applications residing on the device. If instead we had to select DRM settings for each application, the ease of use would likely become too diluted, jeopardizing myDRM’s utility.
The final introductory point is a normative one: aside from its technological capabilities, myDRM requires legal protection to be effective. Fortunately, we are not dealing with having to come up with an entirely new legal framework; the model we need is already largely in place. Take, for instance, the Digital Millennium Copyright Act (DMCA). The protection it affords can similarly be extended to encompass the content protected by myDRM. Only this time, it is the companies that receive our data that will be legally restricted from decrypting or disabling it.
Update 4-2-19: Scott McNealy’s privacy op-ed in USA Today makes a strong case for myDRM. It also makes the case for adopting a corporate veil-like privacy model built around an AI engine. I described this model in a paper titled “Application of an Autonomous Intelligent Cyber Entity as a Veiled Identity Agent,” presented at the 2010 Association for the Advancement of Artificial Intelligence Symposium. Combined, the myDRM and AI-powered privacy paradigms can serve as efficient privacy shields that do not depend on corporate goodwill or legislative efforts.
Update 8-30-2018: Legislation will drive adoption of myDRM. Reviewing the broad privacy rights afforded under the GDPR and the pending CCPA (and the various future iterations of these laws), together with all of the obligations they impose on businesses, makes the case for a not-too-distant future rollout of myDRM. Such a rollout would benefit not only consumers but also businesses needing an efficient compliance platform and a way to stand apart from the competition.
Update 6-23-2018: Building a digital rights management application on a permissioned blockchain is a useful method to promote more control over personal data across the constituency spectrum (data processor-subprocessor- end-user). Implemented correctly, this type of application serves to promote compliance with various privacy laws (such as GDPR and HIPAA), reduces operational risk and enhances the organization’s information management reputation.
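The integrity property a permissioned ledger contributes here can be illustrated with a toy hash chain: an append-only log of grant/revoke events that any party in the processor, sub-processor, end-user chain can verify. This is only the tamper-evidence mechanism in miniature; a real deployment would sit on an actual permissioned blockchain rather than a Python list.

```python
# Toy hash-chained event log illustrating why a ledger of myDRM
# grant/revoke events is tamper-evident. Not a real blockchain client.
import hashlib
import json

def append_event(chain: list, event: dict) -> None:
    """Append an event, linking it to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    chain.append({"prev": prev_hash, "event": event,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every link; any edit to a past event breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": prev, "event": block["event"]}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
append_event(chain, {"actor": "processor-A", "action": "grant", "scope": "analytics"})
append_event(chain, {"actor": "processor-A", "action": "revoke", "scope": "analytics"})
assert verify(chain)
chain[0]["event"]["action"] = "grant-forever"   # tampering is detected
assert not verify(chain)
```

This is the "immutable integrity" that makes the compliance argument work: no single party can quietly rewrite the history of who was granted what.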
Update 10-8-2017: A couple of updates to this post. First, on the topic of the Equifax breach: imagine, for a moment, that all the data Equifax stored was in the myDRM format. Given this platform’s three features, it becomes clear that the market value of the exfiltrated data in such a breach would be virtually zero. There is also an interesting law-enforcement capability that flows from the AI functionality: since this feature behaves as a dynamic data-transactions broker, it could be designed to report any tampering attempt, a report that could include, for instance, the tamperer’s identity, location, and so on. The second update relates to my thoughts on hack-proofing myDRM, and it is divided into two subparts: (a) enabling/integrating the AI feature with a blockchain app interface, which serves to guarantee inter-party trust through immutable integrity, and (b) ultimately replacing blockchain with a myDRM that is compatible with a quantum entanglement environment.