Who Owns Digital Thoughts? The Limits of Property Law and the 2025 UNESCO Recommendation on the Ethics of Neurotechnology
The rapid advancement of Brain–Computer Interfaces (BCIs) and artificial intelligence (AI) in neurotechnology has moved beyond speculative science and clinical experimentation into commercial and regulatory relevance.[1] Advances in neural sensing and AI now enable systems that translate patterns of brain activity into text or other communicative outputs and, in some cases, allow users to control digital systems or physical devices through neural signals. As these technologies increasingly migrate into consumer-facing and workplace settings, they generate novel forms of data: neural signals and the probabilistic inferences derived from them.
As algorithms analyze data from neural activity to generate inferences about cognitive and affective states, a foundational legal question emerges: how should law conceptualize and regulate information that reveals, or purports to reveal, the contents of the human mind? For many years, U.S. data governance has relied heavily on notice-and-consent architectures embedded in privacy statutes and consumer protection law. While American privacy law is not reducible to a pure property regime, it often treats personal data as an object of exchange subject to disclosure and contractual allocation.[2] Whether that structure is adequate for neural data is increasingly contested.
I. The Limits of Property-Adjacent Privacy Frameworks
American privacy law—including statutes such as the California Privacy Rights Act (CPRA)—reflects a hybrid structure combining consumer protection, informational privacy, and market-based consent mechanisms.[3] Under this framework, data processing is generally permissible provided that firms disclose their practices and individuals are afforded certain forms of consumer choice, including the ability to consent to specific uses of sensitive data, opt out of data sales or sharing, and exercise statutory rights such as access or deletion. Scholars and regulators, however, have long questioned whether digital consent models function as meaningful exercises of autonomy.[4]
The concern is amplified in the neurotechnology context. Users of consumer EEG devices or neuro-adaptive systems may lack the technical capacity to understand how raw neural signals can be transformed into predictive or probabilistic inferences about emotion, attention, or preference.[5] AI systems do not merely collect neural signals; they generate inferential profiles that may have legal or economic consequences.[6] Existing privacy statutes often regulate collection and sharing, but they provide limited procedural mechanisms for contesting algorithmic inferences as such. As Brandon Garrett has argued in the broader AI context, procedural due process principles become salient when automated systems generate determinations that materially affect individuals without meaningful opportunities for explanation or challenge.[7]
A second concern relates to commodification. Conceptualizing neural data primarily as a transferable asset risks normalizing its exchange as a condition of employment, insurance, or service access. Property concepts can be analytically useful in structuring entitlements, but they may insufficiently capture the qualitative distinction between commercial data and information that reveals—or enables inference about—an individual’s mental life.[8] Where regulation implicates the architecture of cognition itself, dignity and autonomy concerns arise that are not easily reduced to market exchange models.
These critiques do not imply that privacy statutes are irrelevant. Rather, they suggest that additional normative frameworks may be required when technologies directly implicate freedom of thought and mental integrity.
II. The Human Rights and “Neurorights” Framework
In response to these concerns, legal scholars and bioethicists have proposed the development or clarification of “neurorights”—interpretations of existing human rights principles tailored to neurotechnological contexts. Marcello Ienca and Roberto Andorno have argued that traditional rights to privacy and bodily integrity may require doctrinal refinement where technologies can access or modulate neural processes.[9]
Central to this discussion is the concept of cognitive liberty, sometimes described as mental self-determination.[10] As articulated by Bublitz and others, cognitive liberty encompasses the right to control one's mental processes and to be free from non-consensual intrusion or manipulation. It also implies that individuals should not be subjected to coercive "neuro-surveillance" or compelled disclosure of cognitive information absent compelling justification.[11]
Related principles include mental privacy and mental integrity. Mental privacy would protect individuals against unauthorized extraction or decoding of neural data.[12] Mental integrity extends established protections against physical interference to technologically mediated interventions that alter or influence cognitive states. Rather than framing the problem primarily in terms of ownership, this approach emphasizes the protection of autonomy, dignity, and freedom of thought.
At the same time, human rights framing is not self-executing. International human rights instruments often operate at a high level of abstraction and depend upon domestic implementation. Without legislative incorporation and enforcement mechanisms, rights-based language may remain aspirational.[13] The analytical question is therefore not whether to invoke human rights, but how to operationalize them within domestic legal systems and translate broadly articulated norms into locally intelligible legal and institutional practices.[14]
III. The 2025 UNESCO Recommendation: Normative Significance and Limits
In November 2025, UNESCO adopted the Recommendation on the Ethics of Neurotechnology.[15] As a Recommendation, the instrument does not create binding treaty obligations under international law. It does, however, articulate a normative framework endorsed by UNESCO member states concerning the governance of brain–computer interfaces and neural data.
The Recommendation situates neurotechnology within a human rights framework, emphasizing human dignity, freedom of thought, mental privacy, and autonomy. It calls upon states to adopt appropriate legal and regulatory measures to prevent harmful uses, including applications that facilitate coercive control, unlawful surveillance, or manipulation. It also highlights the risks associated with deploying neurotechnology in employment and commercial contexts where power asymmetries may undermine meaningful consent.
The Recommendation does not impose enforceable prohibitions. Rather, its significance lies in establishing a shared normative baseline and encouraging domestic reform. The instrument also emphasizes the importance of informed consent in the collection and use of neural data. At the same time, this emphasis highlights a tension identified earlier in the context of notice-and-consent privacy models: consent-based governance may be insufficient where technologies generate probabilistic inferences about mental states that individuals cannot fully understand or control. Nonetheless, the Recommendation reflects an emerging international consensus that neural data warrants treatment beyond that afforded ordinary consumer information.
IV. Conclusion and Policy Implications
The governance of neurotechnology raises structural questions about the adequacy of existing privacy frameworks. While U.S. consumer privacy statutes in some states provide important tools, they may not fully address technologies that generate inferences about mental states.
A defensible reform agenda would not require abandoning current statutory structures but supplementing them. Legislatures could explicitly classify neural data and derived cognitive inferences as highly sensitive information subject to heightened safeguards. Several U.S. states, including California, Colorado, Montana, and Connecticut, have already begun experimenting with this approach by classifying neural data as sensitive personal information under state privacy statutes, while no comparable federal framework currently exists.[16] They could restrict conditioning employment or essential services on the disclosure of neural information. They could also require meaningful transparency, explainability, and contestability where AI systems draw inferences about cognitive or affective states with material consequences.
The core claim is not that neural data can never be conceptualized within property or privacy frameworks. Rather, it is that legal systems should resist reducing neural information to an ordinary market commodity. Where regulation touches the integrity of mental life, doctrines of autonomy, dignity, and freedom of thought must play a central role.
References
[1] See Nita A. Farahany, The Battle for Your Brain (2023) (discussing emerging neurotechnology and its societal implications).
[2] Jane R. Bambauer, How to Get the Property Out of Privacy Law, 133 Yale L.J. F. 1087 (2024).
[3] Cheryl Saniuk-Heinig, Private Rights of Action in US Privacy Legislation, IAPP (June 10, 2024), https://iapp.org/resources/article/private-rights-of-action-us-privacy-legislation.
[4] Lauren Henry Scholz, The Illusion of Consent: Rethinking Privacy Online, Ga. St. U. L. Rev. (2025), https://www.gsulawreview.org/blog/the-illusion-of-consent-rethinking-privacy-online/.
[5] See Farahany, supra note 1.
[6] See Brandon L. Garrett, Artificial Intelligence and Procedural Due Process, 27 U. Pa. J. Const. L. 933 (2025).
[7] Id.
[8] Talya Deibel, Private Law and the Inner Self: Comparative Perspectives on the Governance of Neurotechnology, 14 Glob. J. Comp. L. 105 (2025).
[9] Marcello Ienca & Roberto Andorno, Towards New Human Rights in the Age of Neuroscience and Neurotechnology, 19 Life Sci., Soc’y & Pol’y 5 (2017).
[10] Jan-Christoph Bublitz, “My Mind Is Mine!?”: Cognitive Liberty as a Legal Concept, in Cognitive Enhancement 233 (Elisabeth Hildt & Andreas G. Franke eds., 2013).
[11] Council of Europe, CDBIO Report on Neurotechnologies (2021), https://rm.coe.int/round-table-report-en/1680a969ed.
[12] See Ienca & Andorno, supra note 9.
[13] UNESCO, Recommendation on the Ethics of Neurotechnology, U.N. Doc. SHS/BIO/REC-NEURO/2025 (Nov. 2025); U.N. Human Rights Council, Report of the Special Rapporteur on the Right to Privacy, U.N. Doc. A/HRC/58/6 (2025).
[14] Sally Engle Merry, Human Rights and Gender Violence: Translating International Law into Local Justice (Univ. Chicago Press 2005) (describing the process of “vernacularization,” through which international human rights norms are translated and adapted into local legal and cultural contexts).
[15] See UNESCO, supra note 13.
[16] See Cal. Civ. Code § 1798.140 (West 2025) (classifying neural data as sensitive personal information under the CCPA, as amended by SB 1223); Colo. Rev. Stat. § 6-1-1303(4)(b) (2024) (including neural data within “biological data,” a sensitive data category under the CPA); see also Mont. Code Ann. § 50-46-102(11) (2025) (defining “neurotechnology data”); Conn. Gen. Stat. § 42-515(23) (2026) (defining neural data from central nervous system activity).