Orwell’s Last Fortress of Freedom: The Human Brain in the Age of Neurotechnology

Earlier this year, the Constitutional Law Center hosted NYU’s Barry Friedman who discussed how government authorities are using artificial intelligence to make sense of the vast amount of data collected on all of us, presenting intimate pictures of our lives. Professor Friedman credited George Orwell with foreseeing this in his dystopian novel, 1984.

But Orwell was prophetic in an even more fundamental way. He predicted that science would one day be able to read the data accumulated in human brains, breaking down the walls of mental privacy and penetrating the last bastion of human freedom: our thoughts and feelings. This day is here. As Professor Nita Farahany describes in her latest book, The Battle for Your Brain, the ability to read thoughts, beyond reach in Orwell’s time, is gradually becoming a reality, thanks to the field of neuroscience. This can be a force for good. For example, access to a person’s brain data—acquired with consent—can help a disabled individual control his body with his thoughts, or can enable a very early prediction of a neurological disease. But there are also serious downsides. “Once we become aware that others can access what we are thinking, feeling, or imagining, we may attempt to censor even our thoughts, lest we be ridiculed or ostracized for having ideas that go against the grain,” writes Professor Farahany. “Worse still, if governments gain power to track the contents of our brains, they can arrest us and punish us for thought crimes.”

Transcript

So welcome everyone to tonight’s constitutional conversation. This is one that I personally have been especially looking forward to, because it is a subject which is so interesting and that I know almost nothing about. And so I’d like to welcome first Professor Nita Farahany from Duke Law School, and her interlocutor, whom I think most people here probably already know.

My colleague, Professor Hank Greely, here at Stanford, is going to be giving some comments on Nita’s presentation. Her presentation is based upon her new book, which will be available for sale, and I don’t know if you’re doing any autographing, but maybe even some autographing.

Afterwards, we’re going to be having a reception, to which everyone is invited, out in the courtyard at the front of the law school after the talks. Please join us there, and you’ll have a chance to engage further with our two speakers, but also to procure this book.

What’s so interesting about it? The title for tonight is Orwell’s Last Fortress of Freedom. I fear this may be false advertising. I’ve had more than one person come up to me and say, oh, I see the talk tonight is about George Orwell. Actually, it’s not. But Orwell is one of the people, not the only one, who captures this idea that there are so many ways in which human beings can be coerced, but there has always seemed to be one last fortress of freedom, which is the interiority of the mind.

Whatever else they can do to you, they can’t get into the mind itself. And yet it turns out that science and technology are invading even that last fortress, with enormous potential consequences for criminal law, for other aspects of law, but also for society in general. And I fear tonight is a bit of a dystopia, but that doesn’t mean it won’t be hugely enjoyable to hear about the developments and the implications.

So with that, I’m going to turn it over to Professor Farahany.

First of all, thank you so much for having me and for the patience in getting this scheduled. Professor McConnell was kind enough to invite me a while ago, but we wanted to wait until Professor Greely was back on campus from his trimester. Is that what you call them here? A quarter, a quarter away. I’m a Dartmouth grad, and so we called them trimesters. But especially because he was there with me throughout each chapter of writing the book, cheering me along and encouraging me to finally get it done. And I’m glad that I didn’t get it done sooner than I did, because it just turned out that it was published at the right moment, when ChatGPT had been released, when AI was really taking off, and when people understood the broader implications of the technology.

So what is this book about? It’s called The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. For more than a decade now, I’ve in many ways been writing about neurotechnology, and I’ll explain what I mean by that, oftentimes as a lens to understand existing laws.

And so back in the day, in fact while I was visiting here, I wrote an article called Incriminating Thoughts, which really looked at the ways in which our commitments to the privilege against self-incrimination existed, and whether the privilege could survive neurotechnology once we could peer into a person’s brain, since the concept is to protect you from being forced to testify against yourself.

And what would happen once we could decode things directly from your brain, when you weren’t being asked to testify at all? But as I explored different dimensions of it, I thought that, for the most part, neurotechnologies were not going to have widespread commercial applications until a couple of important problems were solved.

And fast forward to today. But first, let’s back up to 2018. I was at Wharton Business School for a conference, and a person stood up in front of the audience and said, why are we humans such clumsy output devices? We’re really good at taking in information, but we’re going backwards rather than forwards in our capacity to express that information from our brains to the rest of the world.

A while ago, we could type at upwards of a hundred-some words per minute, but now, with two thumbs, the very fastest typist is at 54 words per minute. How is it that we’re going backwards rather than forwards in expressing what’s in our minds? And the reason he was talking about this is that he was showcasing a new technology, which embedded brain sensors into what looked like a watch.

And he was talking about whether it might be possible for us to do things like type and swipe with our minds rather than with our hands. He said, imagine if we could operate octopus-like tentacles with our minds instead of these clumsy, sledgehammer-like devices at the ends of our arms. What he was showcasing was one of the first neural interface devices for consumer applications that I had seen that I thought had real potential to go widespread.

So I’d been experimenting with these wearable neurotechnologies, not the Elon Musk drill-a-hole-in-your-skull-and-put-electrodes-into-your-brain kind, but the kind that would be like a forehead band with EEG (electroencephalography) sensors embedded in it.

It could pick up very low-resolution signals from your brain, which for the most part didn’t tell you much, and they were just entertainment-based: maybe they could tell if you were paying attention or if your mind was wandering. But he was talking about something altogether different. First, the thing that I thought would limit neurotechnology from going widespread, which is that none of us are gonna walk around with stupid-looking bands across our foreheads, was being addressed by embedding it into everyday technology.

And second, instead of limiting it to an application like meditation, which is what most of these products had been designed for until then, a very limited set of use cases, he was talking about the killer app: having it become the new way we would interface with all the rest of our technology, replacing peripheral devices like a mouse and a keyboard, so that we could instead think about typing, swiping, moving, and interacting with a device like a virtual reality or augmented reality device. And over time, the goal was to make all of our interactions more seamless. That was 2018. I was absolutely convinced that would be the pivotal acquisition in consumer neurotechnology.

I was also equally convinced that it would be Apple that would acquire them. I was right about it being pivotal; I was wrong about the company. It was Meta who acquired them, for about a billion dollars, a year later. And this fall they released their first device, their new AR glasses, which interface with their EMG wristband.

Now, I was wrong about Apple in that instance, but it turns out, as I later found out, they had pitched it to Apple. Apple couldn’t figure out how to embed it into the real estate they already had in their watch. And Apple also had their own program going on, embedding EEG sensors into AirPods; their patent for EEG sensors in AirPods was published about a year and a half ago.

Now, a lot of this was still relatively unremarkable until generative AI hit the market. And the reason is that every person’s brain operates a little bit differently. So what exactly can you pick up from a noisy EEG signal? Not that much, other than whether a person is paying attention, if their mind is wandering, if they’re happy, if they’re sad.

Even rough things, like showing somebody an image and reconstructing very hazy views of what they’re looking at or thinking about. But on-device large language models, and especially multimodal large language models that are now being integrated with brain foundation models, enable a whole next generation of capacity.

Because while in the beginning, straight out of the box, all that might be possible is detecting basic emotions and basic fatigue levels, over time the device can learn you. And so Meta, at their recent Meta Connect conference, talked about how these devices will learn you over time, as a co-evolution with technology.
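To make concrete what "not that much" from a noisy EEG signal means: the attention-versus-mind-wandering readouts these consumer devices report are typically derived from the relative power of frequency bands in the signal. Below is a toy sketch of that idea using a synthetic signal; the alpha and beta band definitions are standard, but the "engagement" index is an illustrative assumption, not any vendor's actual algorithm.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power in the [low, high] Hz band from a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

# Synthetic 4-second "EEG" trace sampled at 256 Hz: a strong 10 Hz alpha
# rhythm (associated with relaxation / mind-wandering) plus noise.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)   # 8-12 Hz: relaxed, wandering mind
beta = band_power(eeg, fs, 13, 30)   # 13-30 Hz: active, focused attention

# Crude illustrative "engagement" index: more beta relative to alpha
# suggests focus; here the synthetic alpha rhythm dominates, so it is low.
engagement = beta / (alpha + beta)
print(f"engagement index: {engagement:.3f}")
```

This is all a single scalar per window, which is why a raw consumer EEG reveals coarse states rather than thoughts; the point in the talk is that pairing such signals with large models changes what can be inferred from them.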

In my book, I talk not only about the developments that are happening in this space, but about the ways in which the technology holds extraordinary promise, both for our seamless integration with technology and for health and wellness. Think about it: many of you might be wearing smart devices right now.

But those smart devices don’t actually enable you to know virtually anything about what’s happening in your brain, even though you’re here at Stanford University and probably care a lot about your brains and what’s inside them. The capacity to finally have those insights might enable us to do a lot of things, like detect epileptic seizures minutes, up to an hour, before they occur.

These are new insights that have already been developed in research labs across the world, and more are coming with respect to PTSD, Alzheimer’s, and depression. Already there are treatments for depression. Or take Parkinson’s disease, where some people who have tremor don’t respond well to medications.

Being able to pick up the precise neural signature that comes down your arm to your wrist, and send an inhibitory response back, can stop tremor in its tracks, and there’s already a company with FDA approval for such a device, the Cala kIQ, which is pretty extraordinary. These are just some of the amazing advances that neurotechnology makes possible.

But in my book, I also talk about the ways it’s being misused: how in employment settings some employers are already tracking people’s fatigue levels and brain activity; how in China, factory workers are required to wear EEG headsets that track their brain activity, and have been punished based on what it reveals.

Or in educational settings across the world, including in China, where in 2019 the Wall Street Journal published an exposé showing how fifth-grade students in a classroom were required to have their attention levels tracked with headsets that sent real-time information to the teachers at the front of the classroom, to the state, and to their parents, and how they were punished based on what their brain activity revealed.

Set aside for a moment whether it even showed any real data, and just imagine being a child who is told that your brain activity is being monitored by the state, in an authoritarian regime: that last fortress of privacy being breached. I talk about how the technology has already been used by police departments across the world to test for recognition memory of crime-scene details that a person shouldn’t know. Or take cognitive warfare, the sixth domain of warfare, in which NATO has invested substantial resources to try to understand and track things like Havana syndrome, which the joint statement of the intelligence community said it can’t fully write off as mass psychogenic illness: that there may be a foreign adversary behind at least a couple dozen of those cases, and that there may be weapons being developed to target and disorient the brain. These kinds of things, which I document throughout the book, show not just what the harms could be but how the technology is already being used, and the question is just one of scale.

What happens when this becomes wide-scale across the world? What I propose is that in the modern era we need to recognize a right to cognitive liberty, a right to self-determination over our brains and mental experiences. And as I go through the book, I explain that this isn’t unique to neurotechnologies.

So many digital technologies are really aimed at trying to decode, interpret, understand, manipulate, shift, and change what’s in our brains and in our mental experiences. And so the result shouldn’t be to target or try to regulate just neurotechnologies, but to recognize that what we need in the modern era is a set of protections.

And I advocate in the book, as a starting place, that we develop a global norm around a set of human rights that are updated to recognize the right to cognitive liberty. That’s both a right to self-determination, that is, a right to access and change our brains if we choose to, and a right against interference with our mental privacy, which includes a right against interference with the automatic processes in our brains, and a right to freedom of thought, a recognized international human right that has primarily been interpreted as a right to religious belief and religious freedom, but that should also be understood to include a right against interference with, manipulation of, and punishment for our thoughts.

There’s a lot of traction on this in the world at this point. UNESCO has the first global standard on the ethics of neurotechnology, which embraces this framework and is now being debated; the first draft was issued this May in Paris, and I was part of the group of experts from around the world who came together to develop it.

The OECD has a framework to inform this. In the US, two laws have been passed whose contents I don’t totally agree with, but which at least try to take a step in the direction of protecting neural data by treating it as sensitive data, under the California Consumer Privacy Act as well as under Colorado’s privacy law.

And there is a lot of activity happening at places like the Uniform Laws Commission to try to recognize the right to privacy, a right to mental privacy in the modern era. So I believe that gives us a starting place for the conversation. And I’ll turn it over to Professor Greely.

Thank you, Nita. So actually, Michael, I will say something about Orwell, one of my favorite authors. I spent winter quarter this year teaching at Stanford in Madrid, an undergraduate program that was wonderful, and it caused me to reread Homage to Catalonia, which I think is the best piece of nonfiction published in the 20th century.

It’s Orwell’s memoir of his time in the Spanish Civil War. He was an amazing writer who radiated honesty and sincerity, and occasionally quirkiness. And I do think it is one of the great ironies that he gets an adjective, “Orwellian,” when that adjective describes everything he hated and fought against.

But the idea that the interior of your brain is your last refuge: even with torture, which happens in 1984, torture can’t always get the truth out of people. Torture can sometimes get people to say anything, whatever they think the torturer wants them to say, so that it will stop. But no one could ever really see the inside of your brain.

Now, I think Nita and I met in 2005 or 2006, about 20 years ago. She was a newly minted JD-PhD and a fellow at Vanderbilt Law School, and when we first met she was already thinking about these things. She spent a year as a visitor here at Stanford in 2011, where I think you mainly wrote the first two big neuro articles you did.

Incriminating Thoughts, about the application of neurotechnologies to, or let me switch that around, about the application of the Fifth Amendment’s privilege against self-incrimination in the context of neurotechnologies; and Searching Secrets, about the application of the Fourth Amendment’s prohibition on unreasonable searches and seizures in the context of neurotechnology.

I think the Sixth Amendment article came later. But she’s long been interested in these issues. Around the middle of the aughts was when neuroethics started picking up as a subject of substantial discussion, I think largely because of a technology called functional magnetic resonance imaging, which was producing scores, hundreds, of articles a month practically, where researchers who had access to an MRI machine would round up the usual subjects, the undergrad psych majors, stick ’em in the scanner, say things or stimulate them, have them react to something, and see which parts of their brains lit up.

And so you had findings about the site of true love, or the place in nuns’ brains where they felt a mystical union with Christ; anything you could imagine that somebody thought they could get a publication from, with 10 undergrad psych majors, ended up getting published.

It was a really exciting time, a time when, on both the utopian side and the dystopian side, people had huge expectations about where the technology would go. Following Nita’s work and interacting with her regularly over the last 19 years, I feel my role has been the not-really-exciting role of being the anchor, the thing that’s slowing her down.

The thing that says, Nita, it’s not like we haven’t read minds all along. I’m reading your minds right now. I like you; you’re nodding at all the right times. I don’t see anybody buying shoes on the internet, as far as I can tell. We read each other’s minds. We do it in part for evolutionary reasons.

It is an important survival trait. You really are better off if you know whether the person approaching you with a club is about to hit you or about to help you. The problem with our mind reading is we know it’s not very good. Same thing with lie detection, emotional detection, all sorts of things.

We read these all the time. There are people who aren’t very good at it, and it’s viewed as a disability: if you’re not able to read other people’s emotions, if you don’t have what psychologists, for reasons I don’t understand, have termed a theory of mind, it is a significantly limiting factor in how well you do in society.

It’s not completely disabling, it doesn’t make it impossible to succeed, but it is a problem. So we read minds all the time; we’ve been doing this. And the idea of somebody trying to take over our mind and shape it, change our thoughts, control our thoughts? How many of you are parents?

Yeah, so that was not only something we did, it was our job, right? It was our duty to try to change these infants into, we hoped, good moral people. They never signed informed consent, as far as I can recall. I am a teacher; I try to shape the minds of my students. I’m not sure I ever have any success in shaping the minds of my students, and I don’t really care so much about shaping their minds in any particular ideological or political direction.

I wanna shape them into being more skeptical, more hard-edged, more inquisitive, and more rigorous, and with a better sense of humor, and all those good things. So these are all things we’ve done. What’s different, Nita would say, is that technologies are allowing us to do this better.

And there’s a big difference between lie detection that’s 54% accurate, which is what the general psychological studies of mock juries show jurors are, and what the technology might do. The good news is we’re better than chance at detecting lies. The bad news is we’re only at around 54 percent; we’re not much better than chance.

Some people in the initial trials turned out to be 80% accurate, so somebody then did the thing of bringing them back six months later and discovered there was absolutely zero correlation between how well you did the first time and how well you did the second time. It was all just a chance distribution.

And then my drag on Nita was: yeah, but people aren’t gonna want electrodes inserted into their brains; despite what Elon says, I don’t think this is gonna become a big deal. And she said, yeah, but watch the other technologies. And she showed me some of them. I think my favorite, Nita, was the cat ears.

My favorite too. So yeah, you wear a little cap, and it’s got cat ears, and they can be all the way flat, or all the way up, or halfway up, and you can train them to respond to how you are thinking. You don’t think, “I want the ears up”? No, it’s actually based on your concentration levels.

And so it’s whether you’re paying attention or your mind is wandering. Just think about that right now: if I had handed them out as you walked in the door, it would just be, are you paying attention? Is your mind wandering? We could just look across the room. Didn’t you give them to your class at one point?

I wanted to; the IRB had some concerns. Yeah, it’s not research. Exactly. Anyway, people are playing video games through their brains, people are doing a variety of things. I still think it is not there; Nita, I think, thinks it is not there yet. But she’s now convinced me that more of it is coming than I had expected.

I read minds by reading faces more than anything else; faces and body language are what I use to read minds. How we read minds is by the physiological signs, the signals people give off. And if somebody’s monitoring, say, your pulse, or your blood pressure, or other things, that is some information about what’s going on in your brain.

It’s not great information yet, but as the detectors get better, that Apple Watch might be doing a lot of things that you don’t necessarily realize it’s doing. So I am reminded of the great Reggie Jackson, a wonderful baseball player. I hated him when he was a Yankee, but I’m a California Angels fan. Or I was.

And he ended his career with the Angels; he still worked his butt off as an old guy, and it was very impressive. At one point in a press conference he was asked, Mr. Jackson, what about these rumors you’re gonna retire? And he said, there’s no truth in them. You guys write anything you want, and you will.

There’s no truth in them, but you guys keep writing it, ’cause sooner or later you’re gonna be right. And in fact, he retired. I think sooner or later Nita’s gonna be right. I’m beginning to worry that it’s sooner than I expected, and sooner than I would like. So the work on trying to come up with ideas, standards, legislation, model state legislation, international guidelines, I think all of that is useful.

It will be inefficient, unproductive, frustrating. It will lead to fights and debates, and to people taking positions so that they get their names in the papers rather than somebody else. It’ll lead to all the sorts of problems that we have that are called human society. But I think it is an important way to try to get on top of this before we are completely buried by it.

I applaud what Nita is doing, and I keep waiting for her next book, which used to be called On Cognitive Liberty but is now gonna be something else. It’s the step beyond The Battle for Your Brain. So I’ll shut up now. Just one last plug: Homage to Catalonia. Spectacular book. Read it.

Well, we’re going to open this up for you to come down and ask questions, but let me lob a few questions at Nita and Hank, just to prime the pump. First, I’m going to make a prediction; maybe it’ll be wrong, but let’s see what your reaction is to it.

And so here’s my prediction. Whatever is happening in China, I’m talking about what will happen in the land of the free. Where is that? I don’t know, did you see this comment recently about expected thoughts that Rubio made? Actually, I’m going in a slightly different direction. Oh, okay.

I was going to predict that it isn’t going to be government. There will probably be little bits of governmental coercion, and they may be very serious, and the courts may be reacting to them. But my prediction is that people are going to give up their cognitive freedom of their own free will.

I think about what has already happened to privacy. If we had had a conference like this 50 years ago and said, oh, there’s this technology in which every place you go is going to be on record, and central authorities will have a way to know everything, vast amounts of data about your tastes, what you like, and so forth,

I think we all would’ve said, oh, that’s terrible, that’s such an invasion of our privacy; let’s pass laws to make sure this can’t happen. But it has happened, because we all go on Google, and we find location services extremely convenient for the things we want, and so we consent to having every place we go recorded. And we find it very convenient that we can shop around, ask questions, and explore products, even though we’re perfectly aware that this reveals a great deal about our preferences and can compile a profile of us, which is an enormous invasion of our privacy.

But nobody is doing this to us. They can have something pop up on the screen as often as they want, saying, do you consent to this? And we consent, because we want the benefits that we get from these privacy-invading technologies. And I am guessing, you mentioned Meta’s product having to do with augmented reality, that people will really enjoy this, and they will voluntarily put on the headband.

And my guess is that there are all kinds of ways in which this is going to enhance our lives. I don’t know how to guarantee cognitive liberty if we’re all just gonna give it up.

So I hope the version of the book that is available after this is the paperback version, because the editors were kind enough to let me add a new chapter to it; ChatGPT had launched right around then, and a lot of the generative AI studies showed how quickly things were changing. The name of that chapter is “Normalizing Neural Surveillance,” and it talks about exactly that: how we normalize it, how we accept it and assimilate it.

And I’ll give you a couple of examples that are in the book. One of them is in an earlier chapter, so whichever version is there, it’ll underscore the point that Professor McConnell is making. A number of years ago, IKEA decided to run a fun marketing experiment. They were selling limited-edition rugs in their Brussels store.

They had entered into agreements with really high-end artists to sell these limited-edition rugs. The idea was to democratize art, right? Bring extraordinary art at affordable prices to people. But what was happening was that people would go into the IKEA, buy the rugs, and sell them on eBay for many times what IKEA was selling them for.

And they were very frustrated. So they decided to hire a marketing company to come in and figure out if there was some way they could combat this, but also bring attention to what they were doing. And so they allowed people to come into the store in Brussels to look at the rugs, but they had to put on a headset.

The headset would then, while they were looking at the rug, project a score up on a screen in front of them; it was purportedly measuring their EEG activity, their electroencephalography activity, in order to tell whether they loved the rug or not. And only if they really loved the rug would they be able to purchase it.

Nobody objected. They all thought it was great; they put on the headsets and really enjoyed it. And nobody asked, what are you doing with my data? How should I think about this? Or take L’Oréal, which entered into a partnership with Emotiv, one of the leading consumer neurotech companies, where you can walk up to a perfume counter and put on a headset that measures your brain activity in response to a custom scent they will make for you, and they will tell you which one you really love, which one lights you up the most, for maximizing sensory pleasure. And people happily walk up to the counters and give away their brain data. Or, more recently, at the Museum of Modern Art, where the first generative AI art was on display, in a partnership with Neuroelectrics you could walk up and look at the art and have your brain activity projected up next to it, so you could see what brain activity looks like in response to looking at art.

And that’s happened in hundreds of museums across the country, where people just think it’s cool and give away their brain activity. And that’s not even in exchange for anything, right? People just do it for the novelty or the fun of it, without even thinking about what they’re giving away.

Or there are Facebook groups where people already share their brain activity with each other to see who achieves the best gamma activity in meditation. So yeah, I think people will give it away for free because they don’t recognize the risks. And even if they do recognize the risks, if for most of the products they might want, or might have to, gain access to, the only way you can interact with your computer in the next generation is by signing away your right to the data, that means you won’t have any choice but to do so. And then that data is sold to third parties, who use it and mine it for other purposes. Now, how do you end up with a right to cognitive liberty in a world in which people trade it freely in exchange for goods and services? I go into that in the book, trying to appeal even to the staunchest libertarian, to say that there is a problem in this asymmetry of power, this contract of adhesion: the fact that, even in the employment setting, you cannot achieve the kind of freedom of contract that you would wanna achieve.

If your boss has access to your brain activity and you don’t have access in return, you can’t even negotiate a fair settlement. All of that suggests that I think we could achieve a right to cognitive liberty, even though up until now what we see is people having to give away a huge amount of data about themselves in exchange for any goods and services.

This discussion reminds me a little bit of another area I work in, genetics. How many of you have sent your DNA off to 23andMe or AncestryDNA? You should probably delete that data now that they’re going under. Yeah, there’s a way to do that. I learned that my ancestry is mainly European, which was not actually a great shock to me.

They will tell you things like your likely eye color, which is also not a deep surprise to me, and a variety of other traits. Something like “do you like the smell of this perfume” falls into that category. It’s interesting that they’re doing this from your brain; your brain is telling you whether you like it or not.

That then leads to data that can be used for lots of other purposes. When 23andMe or Ancestry or the others are doing that genetic analysis, and you’re getting back your ethnic heritage or finding a second cousin once removed, they’re actually looking at somewhere between 800,000 and a million little bits of your DNA that can reveal other things. Similarly, when you’re checking out the perfume to see if you truly love it, whether you think you love it or not.

This will tell you whether you truly, in your deep brain essence, love it. There’s a lot of other data being collected along the way. And so I still am a little skeptical; I do not yet see the killer app, to use Silicon Valley terms, the one that will get us all hooked up. It may be there, and I think interfacing with your computer may be it.

Actually, I think interfacing with your TV may be it. If I can just think at the TV, not that I want that, and not have to figure out which buttons to press on the controller, or even find the controller, which is another big problem, maybe that’s the killer app. But you could also have death by a thousand cuts, where there is no one killer app.

But there are a lot of little things that people get intrigued enough by without necessarily realizing that they’re not just conveying information about what odors or scents you like, but potentially other and deeper information. I think if you think about cognitive liberty as going beyond neurotechnology, which the book does, right?

Which is to show you the easiest use case that most people can see clearly, which is literally decoding directly from your brain what you’re thinking. And then back it up and think about your interactions with generative AI right now. Theory of mind: generative AI has a very powerful theory of mind of the users that interact with it.

Take Character.AI, which has effectively had conversations leading to a child committing suicide. Part of that is because of the way the chatbots interact with the person, the way they are designed to tap into all kinds of biases of the individual.

And so as you start to think about the broad swath of data that’s being collected about brain and mental experiences, and how these systems are designed, the question is: is there a killer app out there already? I’m pretty sure there is. It’s generative AI. Everybody’s using it already, and increasingly those systems are interacting with all of the rest of our sensors and devices.

So I don’t think you have to wait for neurotechnology to go widespread to consider the question of whether, in the modern era, there is a need for a right to cognitive liberty, a right to self-determination over our brains and mental experiences. The question is simply how you enact it.

What are the contours of it? What are the limits of it that still allow people to have self-determination and choose if they want to trade their brain activity, or sell their brain activity, or sell their personal data, while other people might choose to enjoy a more robust right to mental privacy?

So far it’s been mostly dystopian, so let me ask a question about progress. Are these developments that you’re writing about, Nita, the same as what I hear about with people who have lost a limb or are paralyzed, and are able to use their minds directly to accomplish things when they no longer have hands or feet they can control?

Is it the same technology, such that if we could magically control it, because of the dystopian implications, we would lose out on these pretty remarkable developments? Yeah, so I think I strike a pretty optimistic tone despite the dystopian conversation in the book, which is to say that I actually think this technology can be the most empowering technology for humanity, if we’re able to steer it, to make the choices that we have to make. And so, is it the same technology? It’s the same class of technology. The remarkable advances that you’re reading about are in clinical trials for implanted neurotechnology, which has much greater resolution and is very limited right now in the class of people who have access to it. But let’s talk about one of the most promising of those, which is moving into phase three clinical trials. Instead of

drilling a hole in the head and inserting technology, they are able, in a cath lab, to essentially put in a stent, in a blood vessel as it leads to the brain, that listens to brain activity, and in particular motor activity. This is a company called Synchron that has something called the Stentrode. And the Stentrode is about to have at least hundreds of patients implanted, because it’s very easy to do it in a cath lab, very quickly and much more safely.

And it’s enabled people to do things like type on their computer. More recently, one of the exciting things they showed at the GTC conference a couple of months ago was a generative AI model integrated with the Stentrode and a virtual reality headset. So they have the implanted device and the VR headset, which was integrated with the person’s environment, where they could do things like dispense food from the dog feeder so they could feed their dog, and type, and interact with a whole lot of things in their environment that they couldn’t otherwise do.

And this person has complete paralysis. That’s really exciting. Or, in Switzerland, one of the companies was able, for somebody who hadn’t been able to walk for years due to an accident, to take the signal from the brain, bypass the damaged area of the spinal cord, and send the signal to the rest of the body to enable them to walk again.

That’s extraordinary. And with the Stentrode, they talk about, in their new partnership with NVIDIA, developing something using Holoscan. One of the limitations of some of this technology, the reason it isn’t as widespread yet, is that it’s hard to get brain data that’s associated with contextual data, like labeled data.

Think about the way AI models work: in part, you want to have a lot of labeled data, but for most brain states there isn’t an associated label. So they’ve entered into a partnership with NVIDIA where people who have the implanted neurotechnology will be in basically a 360-degree Holoscan capture, and every single detail of their environment will be encoded into the model.

Along with the brain activity, that will train brain foundation models to learn what brain signals mean, and that will make a very powerful model for decoding the brain. That’s exciting. And we could talk about how that’s also really dystopian and terrifying, if you want to go there again. Can I just ask a quick follow-up before turning to Hank? These seem like incredibly wonderful possible developments.

Is it possible for you, in a small number of sentences, to help those of us for whom this just sounds like science fiction understand how this could possibly be true? You think something and it causes some physical thing to happen; how could that possibly work? You just thought something and it caused vibrations in the air that communicated to these people?

The simplest way to put it is that every time you think, neurons are firing in your brain that give off tiny electrical discharges. With any particular thought, hundreds of thousands of neurons are firing at the same time, giving off bigger patterns of electrical discharge, which can be measured by external sensors like electroencephalography (EEG) sensors.

And then machine learning algorithms, or AI, have been trained to understand what those different electrical signals and tiny differences in amplitudes and waves mean, and to interpret them. So the most basic way of understanding it is that your brain is electrical circuitry, and there is a decoding machine that’s decoding what all of those electrical signals mean and making sense of them.

Sometimes those are motor signals, like move your arm. Sometimes those are signals like type out this sentence. But every brain state that you have is represented through those different firings and can be decoded as well. And what we’re seeing, I think, is a couple of different curves changing in ways that have major effects.
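As an editorial aside, the decoding pipeline described here, electrical patterns picked up by external sensors and interpreted by a trained model, can be sketched in miniature. The following toy Python sketch is purely illustrative: the signals are synthetic, the "brain state" labels are invented, and the nearest-centroid rule stands in for the far richer models real devices use.

```python
import numpy as np

def bandpower_features(eeg, fs=256):
    """Summarize a raw EEG window (channels x samples) as average power
    in the classic frequency bands -- the kind of coarse feature a
    consumer headset's classifier might consume."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma
    feats = [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands]
    return np.concatenate(feats)  # one feature vector per window

rng = np.random.default_rng(0)

# Synthetic "calibration" recordings: one invented mental state produces
# large oscillations, the other small ones (toy data, not real EEG).
calibration = {
    "high_activity": [rng.normal(0, 1.0, (4, 512)) for _ in range(5)],
    "low_activity": [rng.normal(0, 0.1, (4, 512)) for _ in range(5)],
}
centroids = {
    label: np.mean([bandpower_features(w) for w in windows], axis=0)
    for label, windows in calibration.items()
}

def decode(window):
    """Nearest-centroid 'decoder': label a new window by whichever
    calibrated brain-state centroid its features are closest to."""
    f = bandpower_features(window)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))
```

A real system replaces the nearest-centroid step with a trained deep model and far richer features, but the shape of the problem is the same: electrical patterns in, inferred mental state out.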

If you are quadriplegic or have locked-in syndrome, you’re willing to do a lot and risk a lot in order to have some control. You’re willing to have holes drilled in your skull and electrodes inserted, and maybe reinserted over time, and to spend an infinite number of hours training yourself and training the computer, because it’s really important to you.

But that’s really hard. Things like the cat ears, that’s just surface EEG, which is easy to get: cheap and easy, but it doesn’t tell you very much. It’s very limited in how much it tells you. If you really want to learn a lot, you’d like to have hundreds of thousands of electrodes inside somebody’s brain to pick all this up.

The problem is, if you’re not quadriplegic, you probably don’t want all those holes in your head and all those electrodes in your brain. But things like the stent, like Synchron’s, are lowering the barrier, making it cheaper, easier, and less disconcerting to have something in your brain.

Does it mean that I would do it right now? No. But if I’m diagnosed with Parkinson’s disease or a movement disorder of some sort, and this could help me? Yeah. And if it could guarantee that I’d have a decent tennis backhand, I might be there already. So, as the technology gets better at reading more detail less intrusively and invasively,

It opens up the market more and more broadly. I think there’s also a huge augmentation side that people are likely to go for. We’re already talking about a form of augmentation for people who are paraplegic, right? But then, for the person who could have a Stentrode with minimal side effects, if it’s proven safe, with an on-device multimodal large language model that’s able to pick up subconscious activity right before the conscious activity, the capacity for human thought and human output

Massively increases, potentially. And I think at least some of the companies, some of the founders, believe there will be a much bigger market for even implanted neurotechnology than what Professor Greely’s comments might suggest. And I think that’s likely to be true as well.

The question is how much power you can get out of wearable neurotechnology to enable some of those benefits to be realized. I think that’s getting much, much more powerful too, with the convergence of AI and neurotechnology, and when you have tons of sensors inside of an AirPod as opposed to the one or two surface electrodes that some of the earlier devices had.

That’s still much lower fidelity than having something inside the brain or inside a Stentrode, but the capacity for augmentation, I think, is one of the reasons people will opt for it, well beyond just neural interface technology. If it works. It already is. I’m sure people in the audience have questions.

Please come down to the two microphones and speak into them. Let’s see if that one’s on. Can you just pull it? Okay, go ahead. Thank you for the talks. I just had a particular technical question that I wonder if you could give some insight into, regarding resolution and coverage, because even the more advanced implants, Neuralink’s, are in one location, and they’re still seeing the combined, integrated output of, I don’t know, hundreds of thousands of neurons. Obviously, if you can teach it to type your individual 26 letters, that’s some degree of resolution.

But there’s the question of the resolution you need to get to really advanced stuff, and secondly, coverage over a larger part of the brain. That was my question. Thank you. Yeah, it’s a good question, and the answer is an imperfect one, which is: unclear. Right now, most of the implants are trying to decode motor activity, which is intent, from just a particular region of the brain, trying to connect up your intention to move or to swipe or to type.

With some of the other implants, what they talked about at the GTC conference with Synchron was trying to go back a little further: rather than just picking up intentional communication, to pick up basically the subconscious before the intention. Rather than having to form the intention to move, it could pick up the signal right before you do that, and the AI could decode it, so that it’s a more seamless kind of interaction.

The other thing I’ll say about it, which is why I say it’s somewhat unclear, is that as it moves from motor activity and intentional speech to trying to cover more areas for language, there was a study that came out using the first GPT model, GPT-1. This was Alex Huth’s study out of UT Austin in April of 2023.

And it was very interesting, because what they found was that there was redundant representation of language across the brain. It didn’t matter which region of the brain they decoded; they got the same resolution with their classifier. And that redundant representation suggests that you might not need complete coverage of the brain in order to have very high-resolution decoding.

It may be that just particular regions could suffice, opening up the possibility that even wearable neurotechnologies over region-specific areas might be sufficient with powerful enough classifiers. It’s a little unclear, but that at least gives you a few ways into the technical answers you’re asking about.
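The redundancy finding has a simple statistical intuition: if the same underlying signal is expressed in several regions, a decoder trained on any one region recovers roughly the same accuracy. Here is a hypothetical toy simulation of that idea in Python; the numbers are invented and have nothing to do with the actual study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n):
    """A shared binary 'semantic' signal, redundantly expressed in two
    simulated brain regions with independent noise."""
    y = rng.integers(0, 2, n)
    signal = np.where(y == 1, 1.5, -1.5)[:, None]
    region_a = signal + rng.normal(0, 1, (n, 1))  # region A measurement
    region_b = signal + rng.normal(0, 1, (n, 1))  # region B, independent noise
    return region_a, region_b, y

ra, rb, y = sample(2000)

def fit_threshold(x, y):
    # Midpoint between class means: a one-feature nearest-centroid rule.
    return (x[y == 0].mean() + x[y == 1].mean()) / 2

def acc(x, y, t):
    return float(((x[:, 0] > t).astype(int) == y).mean())

# Train a separate decoder on each region, evaluate on held-out data.
ta, tb = fit_threshold(ra, y), fit_threshold(rb, y)
ra2, rb2, y2 = sample(2000)
# Decoding from either region alone gives roughly the same accuracy,
# because the information is redundant across regions.
```

The point of the sketch is only the qualitative pattern: when the signal is redundant, coverage matters much less than signal quality, which is why region-specific wearables might suffice.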

Henry.

Hi. Thank you so much. There are a lot of questions I could ask, but I’ll stick with a fairly straightforward one. When you were talking about the couple of U.S. jurisdictions that have actually passed neural data laws, you were only willing to award California and Colorado a participation trophy. What are they doing wrong?

In the same way that I think this isn’t just about neurotechnologies, I think carving out only the data derived from neurotechnologies for special protections puts an unnecessary burden on research and advancement of a particular technology, when the problem is a much bigger one. It’s about the interaction of digital technologies more broadly with our brains and mental experiences.

And so what those laws have done is include these really surprising preambles that mischaracterize a lot of what neurotechnologies can and cannot do. They single out the technology, and then they create a disproportionate burden and a carve-out. In California, there’s a carve-out for non-neural data, so things like eye-tracking data, or other kinds of inferences about brain and mental experiences even from sensors themselves, are excluded.

The tech companies, I think, lobbied effectively to carve those out. So I think the laws deserve a participation trophy; they’re not well conceived. The Uniform Law Commission has launched a study committee to study the issue of mental privacy more broadly, and to see if there’s a better, more uniform approach that we might be able to achieve.
And I think that would be good. I don’t know what the answer will be, because it takes a lot of people with a lot of expertise to approach the issue and figure out how to address it. But I’m encouraged that there’s a broad set of stakeholders at the table trying to work through that issue.

Great. I’ll give you an example of something I think is not covered by the California and Colorado statutes. I wrote a paper with somebody who was involved in a startup that was going to build a mental health device based on how you used your iPhone, to measure whether you were currently having a depressive episode or whether you were manic.

It wasn’t looking at what words you typed; it looked at how you handled the phone, how you swiped, and how you clicked, and it used that. They had several good peer-reviewed publications. Like almost every startup, it ultimately failed, but I don’t think the idea was wrong. And that’s the sort of thing I think you’re saying is not picked up.

Yeah. All of the cognitive biometrics: many of those biometrics are more precise right now at making inferences about brain and mental states than the neurotechnologies are, to Hank’s point. And those are all treated differently legally. If what you’re interested in is protecting mental privacy, you wouldn’t carve out one single data type to achieve that.

Thanks. Hey, thank you so much for scaring me. I have a comment and a question. This is scary for several reasons. I’m probably one of the few people who have one of these. Oh, I love it. And it’s by accident; I would like to claim credit. What happened is I was a student at Stanford in EE, and I graduated in 2012.

When the first smartphones came out, they were expensive for a poor grad student, so I didn’t buy one. By the time I went into the workforce, all the negative things were beginning to be known, and then Snowden sealed the deal. Yeah. And in 2014, Jonathan Mayer, I believe his name is, a student who was here in computer science.

He did a course on Coursera where he laid out all the dangers of just collecting metadata. So it made me even more scared. I said, dude, something is gonna happen. And guess what? Nothing happened. Surveys, the ones that I have looked at, say that depending on how you ask the question, somewhere between 80 and 90 percent of Americans are okay with giving it out for free services.

The few serious attempts that I’m aware of are in Europe, and I don’t know if the cure is worse than the disease, because GDPR and the other things put a lot of power into the hands of bureaucrats. And as you just said earlier about California, lawmakers are not experts in the brain, so whatever they come up with, you don’t know if it’s going to be better or worse than what they have now.

So I don’t really know; it’s very scary. Anyway, thank you. It is interesting to see how, if you survey people in the US, many of them say, oh, I love personalized advertisements, go ahead and take all of my data to do that. But if you start to help people really understand what it means to have cognitive steering happening, or if you ask them whether they’re okay with their kids being addicted to technologies or social media, or with the inability to come off certain platforms, they’re not okay with it.

So I think it depends on how you frame or ask the question. All of it happens through the same data, right? And so people might like personalized recommendations to be able to buy a particular product, but they don’t like being addicted to their technology. They don’t like the cognitive steering.

They were very disappointed by the Cambridge Analytica scandal. All of those are the same mechanisms, right? And so the question is how you get some of the things people like without all of the downside risks that come with them. Let me add one thing to that. Timing on technology regulation is such a tricky matter.

There was a scholar named David Collingridge who, I think 20 or 30 years ago, formulated the Collingridge dilemma. The easiest time to regulate a technology is at the very beginning, before it has built up vested interests that want to keep it going. But the problem with that is you know very little about how the technology is actually going to play out. By the time you know enough about how a technology is going to play out,

it’s been out in the world for a while, and among consumers, producers, and intermediaries there are a whole bunch of vested interests, so that when you finally get enough information to regulate intelligently, the political barriers to regulation have increased substantially. So that’s why, although I think we’re not at either the utopian or dystopian edges yet, this is the right time to start trying to set these frameworks.

Yeah, it’s always premature until it’s too late. That’s just the way life is. We have time for maybe one more question. We go to 6:30. Oh, you have 30 more minutes? Oh, we have 30 more minutes. Excellent. We’ve got plenty of time for talking.

Wonderful. Okay. Thank you. Thank you for the presentation. I forgot how you framed it, the last chapter of the paperback. The right to cognitive, what was it? The right to cognitive liberty. I was fortunate that you just mentioned this phrase, cognitive steering.

In your last answer, presumably, would this right to cognitive liberty entail or include freedom from cognitive steering? How would this concept of cognitive steering figure into that framework? Yeah, it’s a good question. So the last chapter is actually normalizing neural surveillance, but the original hardback ends with the right to cognitive liberty, and I can’t remember if it’s chapter eight or chapter nine; I think it’s chapter eight.

It’s a chapter called Mental Manipulation, which goes into the broad question of how you even figure out where you would draw the line, right? Because there are so many persuasive technologies which might not legally be manipulation in a problematic way. I do argue that freedom of thought should include a right against interference with, manipulation of, and punishment for our thoughts, and manipulation is part of that.

But how you define manipulation is the hard part. And I come down with legally a narrower definition of manipulation than I think morally I would come down with. Which is to say there, back to Hank’s original point, like one of the stories I tell in that chapter. As I opened with my daughter, Electra at the time, who was two, I think at the time.

I wrote the story. She comes into the kitchen, and she’s just forming language at this point, and she comes up to me while I’m standing next to the refrigerator and says, “Mommy, would I like a popsicle?” And I said, “I don’t know, would you like a popsicle?” trying to correct her language.

And she said, “Oh, yes, thank you, Mommy,” and reached for the freezer door. I was totally duped, and we all laugh at that, right? And I start the chapter there to say there is something delightful about the moment a child goes from having a theory of mind to developing the capacity for persuasion of others, which is part of what we learn to do once we have a robust theory of mind. Compare that to the Cambridge Analytica scandal, right?

Where we are steered, or information is misused, in ways that try to filter what we see, give us misinformation, or get us to vote in a particular way. Or to constructs in video games, or notifications, that are designed to hijack our brains.

Or a bunch of studies that came out of TikTok in China, where they were pairing short-form video with their algorithm and figuring out when they could short-circuit the brain to push you into automatic thinking and override self-control.

That’s how they were going to perfect the algorithm: get you to move into automatic thinking and bypass self-control, to keep you on the device over and over again. That, to me, is problematic. And so what I had to come up with was, really, how do you define that? I turned to some philosophers, my favorite being Harry Frankfurt, who has a theory of freedom of action, which is your capacity to act consistently with your intentions, right?

And once you override a person’s freedom of action in a way that causes harm to them, we can understand that legally as manipulation, even though not every way of trying to persuade us, or every way of serving up an advertisement to us, which has been true for millennia, is going to be something that we could define legally as manipulation.

So I do think that cognitive liberty includes a right against manipulation, but the challenge then is figuring out, among all of the things that are happening in the digital universe, where you’re going to draw the line to regulate, to say that this violates freedom of thought, whereas the rest of these things fall within our ordinary human experience. And some of it really doesn’t require anything neurotech at all.

The casinos for a long time have hired lots of good psychologists to figure out what kinds of noises and lights and buzzers, and what kinds of payoff rates, will keep people feeding the one-armed bandits forever, without actually needing any kind of MRI or EEG, although I suspect they are now using MRIs and EEGs to try to get better at it.

We try to manipulate people all the time, often, we think, for their own good, especially if you’re a parent. Where do you draw the line? I think drawing the line is really hard.

Thanks for being such a pioneer in this field, having such a strong voice, and being so informative. What I’m wondering is, you said that NVIDIA, they’re trying to correlate and generalize a scan to back out the initial state of the person. But what about all the genetic differences, all the biases that people have due to their experiences and the way they’re brought up?

If the same wave or the same frequency from a brain scan of one person was found in another person, do you think it’s fair to say that the experiences those people are having are equal, if the wave or the EEG data is the same? Or do you think that, because of a person’s genetic makeup, each person’s brain is unique in that sense?

Yeah, it’s a great question. I mentioned Alex Huth’s paper that came out in April of 2023, and it caught my attention for a couple of reasons. First, it was the first paper that, instead of the traditional machine learning models for decoding, was using a generative AI model to encode and decode.

What they did was have participants go into an fMRI scanner and listen to hours of podcasts while they trained the classifier, using GPT-1 for both encoding and decoding. And then they had people imagine stories, and they did the same thing.

Then they basically showed the model stuff it wasn’t trained on: these people are listening to new podcasts, interpret what it means. And it had really remarkable results, decoding entire paragraphs of language from the brain with a very high degree of accuracy.

This was remarkable for lots of reasons, not least because most people wouldn’t think you could do that with fMRI data. But that doesn’t answer your question. This is the part that does: they had a section in their paper, and I had never talked to them at that point, which was about mental privacy.

And they said: we then tried applying the model to somebody it had not been trained on, to see if it could decode their language, and it went down to basically no better than chance. So, they concluded, there’s very little risk of mental privacy concerns, because what’s trained on one person doesn’t work on another.
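As the conversation goes on to note, that picture changed once a small amount of per-person calibration data could bridge the gap between subjects. Both halves of the story, a decoder that falls apart on a new person, and a cheap learned alignment that restores it, can be illustrated with a deliberately simplified Python simulation. Here two simulated "subjects" express the same two mental states through different linear mixes; every number is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mental states live in a shared "semantic" space...
def latents(label, n):
    center = np.array([2.0, 0.0]) if label else np.array([-2.0, 0.0])
    return center + rng.normal(0, 0.5, (n, 2))

# ...but each subject's measured activity mixes that space differently.
W_a = np.array([[1.0, 0.3], [-0.2, 1.0]])   # subject A's "wiring"
W_b = np.array([[0.1, -1.0], [1.0, 0.2]])   # subject B's, very different

def record(W, label, n):
    return latents(label, n) @ W.T  # simulated recordings

# Train a nearest-centroid decoder on subject A only.
Xa = np.vstack([record(W_a, 0, 50), record(W_a, 1, 50)])
ya = np.array([0] * 50 + [1] * 50)
cents = np.array([Xa[ya == c].mean(axis=0) for c in (0, 1)])

def accuracy(X, y):
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return float((d.argmin(axis=1) == y).mean())

# Applied directly to subject B, the decoder fails.
Xb = np.vstack([record(W_b, 0, 50), record(W_b, 1, 50)])
yb = ya.copy()

# A handful of labeled calibration trials from B fit a linear map
# into A's feature space via least squares.
Xc = np.vstack([record(W_b, 0, 5), record(W_b, 1, 5)])
targets = cents[np.array([0] * 5 + [1] * 5)]
M, *_ = np.linalg.lstsq(Xc, targets, rcond=None)
# accuracy(Xb, yb) is poor; accuracy(Xb @ M, yb) recovers.
```

The toy alignment step is the analogue of the "minimal amount of training data" in the anecdote: the shared structure does most of the work, and a few calibration trials adapt it to a new person.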

So that caught my attention, and I reached out to them and we had a conversation, and I was like, oh, that’s really fascinating. And they were like, yeah, but we’re working on a transformer, and we think it’s years away. That was April, right? Then we invited them to the Neuroethics Working Group of the US BRAIN Initiative to give a presentation.

The next meeting was in August, and he called me up in July and said, it turns out it’s not years away: with just a minimal amount of training data, we were able to apply it to a different participant with basically the same degree of accuracy. The way most of these work is that you have to spend a couple of minutes calibrating whatever the device is for every person.

So it still needs a tiny bit of additional data for every person, but it works with remarkable accuracy across people. Generative AI has completely changed the entire landscape. Now, you mentioned the NVIDIA-Synchron partnership, and that’s different still, because that’s actually trained on each person, right?

So yes, they’re going to have a generalized brain foundation model, but that brain foundation model is on-device, and it learns with just a tiny additional amount of training data, like the transformer that Alex Huth’s lab has developed. It can then translate your differences and apply it to you. Will it be perfect?

Maybe not. Brain states are probabilistic. And one of the things that I think is a little chilling, exciting but chilling, right, this is all double-edged, is a world in which you have a generative AI layer serving as the intermediary between subconscious thought and actions in the world. There is a lot of research showing that we rationalize in our conscious minds decisions that were made in our subconscious minds.

It’s entirely possible that the generative AI model will get it wrong, but you will never know it got it wrong, because you’ll rationalize it as your own action once the integration occurs. Yeah. So that’s the part that really freaks me out. I have a new paper that’s trying to figure out how to address that through a new fiduciary model.
But even that doesn’t perfectly solve the disjunction there. So I do think the issue of individual differences is really important. Back 15 years ago, when all this was beginning to hit big, it looked like the killer app was going to be lie detection, fMRI-based lie detection, and there are probably now 40 peer-reviewed studies that found statistically significant good results with fMRI-based lie detection.

That was 80 to 85 percent accurate, but not 95 to 98 percent accurate, and some of that is due to individual differences in brains. It sounds like we are getting better at figuring out and being able to predict those. I doubt that we will ever get perfect at it. In the area of neuroethics, we talk about this as the G2i problem: the general to the individual.

And in law in particular, we’re not so much concerned about 75 percent of people. We want to know whether that defendant, that individual person, or that claimant who claims he’s in constant pain is really feeling pain or not, not that 75 or 85 percent of people with that brain signature are feeling pain. That’s going to be one of the constraints.

How much better can we get at doing that? It sounds like we’re getting better faster than I would like, but that’ll be a major constraint. And it gets even trickier. You talked about genetics. There’s a neurogeneticist named Kevin Mitchell at Trinity College Dublin whose work I like quite a lot.

He has a book called Innate, about brain development, that I recommend. And he says, more or less, your brain is about a third determined by your genes, a third by your experience, and a third by chance. Brains are really complicated, but they are also like rivers, in that you never step in the same one twice.

Your brain is constantly changing. If you remember tomorrow anything I said today, that’s a waste. But if you remember anything Nita said, it’s because she has made physical changes in the connections in your brain. And so you not only have differences between individuals, but differences within individuals at different times.

Usually those probably won’t be very important. Sometimes they will. Will we be able to tell the difference? Who knows? The lady in the green shirt.

As my husband knows, I am innocent of legal education. But I did teach high school for many years, and it was the bunny ears example that made me stop and think. Also, your daughter’s great. But one of the things I’m thinking about, and again, I do think this has civil liberties implications, and it’s made much more complicated by the fact that we’re talking about children: what if you could reverse the process and use this to help students concentrate rather than lose concentration, and to get out of their automatic brain? For our 11-year-old grandson, this would be a very useful technology.

Get out of the lizard brain and into the use-your-common-sense brain. I would think this might be very tempting in the educational world, but it’s also a little scary. I can think of what I would feel like in a classroom, knowing that the teacher, who I was looking at with my beatific, totally interested smile, did not know that I was paying absolutely no attention to what he or she was saying, but that I was good at faking it. And if that information became available, does this create a different kind of civil liberties issue? So maybe just think about the fact that this has implications for education below the university level.

No, you’re absolutely right. And first, it’s nice to see you. So one of the first high-end products that is out on the market is from a company called Neurable, which has launched a set of headphones. And the reason I say it’s high-end is they’re comfortable, and they’re partnered with one of the top audio companies so that you can listen to music.
The music is whatever you play, but it’s great quality for listening. And then it’s packed with electrodes around the ear. And the primary application that they’ve launched with first is attention and focus: to train people to be able to see their attention, and then to improve their attention and focus over time.

There are other companies out there. There’s a company called Mendi that also has a functional near-infrared spectroscopy one that does this. But the Neurable one kind of fits into the world that I’ve been talking about, which is multifunctional devices, right? You can do it while listening to music, while listening to your class lecture, whatever it is.

You can do it at the same time. And there have been a bunch of studies that have been done. I don’t know if you saw the recent New York Times article about ADHD and whether we need to rethink what it is. I’ve long wondered: is that the best approach, the kind of drug-based approach, bathing the brain in a drug, as opposed to being able to retrain the brain through things like improved focus over time?

And there have been a lot of studies done on children looking at whether sustained use of some of these consumer wearable devices, with gamified attention and focus games, could be better for children with ADHD, to rewire their brains rather than to put them on drugs. And just as an example of that: for some people, I mentioned already the Parkinson’s case, where a more targeted neurointervention is more effective.

Also, for a number of people with depression, there’s a device on the market called Flow, which has been shown through neurostimulation to be more effective for those individuals, or at least for some subset of them, than taking drugs for depression. And there are all kinds of questions about the effectiveness of SSRIs over time.

All that is to say, yes, there will be a number of interventions like this, not just for children but also for adults, for attention and focus over time. And the real question, as you put it, is how will it get applied, right? Is it something for parents and children to use at home? Is it something that children can use in the classroom, but that isn’t being monitored?

Do they have a right not to use the devices? My framework on cognitive liberty includes the right to self-determination: the right to access data about your own brain and to change it if you choose to do so. And that includes a right like this one, I think, which is to be able to use those technologies and make choices to do so, but not by having the teacher peer over the data, not by being forced to turn over that data to the school board, not by having that data sent to the state and being punished for what your brain activity reveals.

What I think is gonna be so hard with all of this, whether it’s a good application or a bad application, is, as I tell my students, and some of them are here and have probably heard me say this, that the first two words of any answer to any question should always be “it depends,” ’cause it always depends.

So yes, there are employers who are requiring employees to wear things that tell whether they’re paying attention or not and whether they are daydreaming. Some of those can also tell whether or not you’re getting sleepy or drowsy. If you are a long-distance truck driver and you’re getting drowsy and sleepy, maybe it’s a good thing that somebody somewhere can figure that out and, I dunno, call your phone, or maybe send a little shock to wake you up.

Probably not the shock, but the application really matters. Lie detection: I used to use my then 16-year-old daughter as an example, this was 15 years ago, who would say that she’d been at the library on Saturday night, and I’d say, oh, really? Where were you? Who were you with? It drove my wife crazy. Too much of a lawyer, too skeptical.

And that wasn’t really appropriate, as my wife pointed out. But we have good friends who had a child about the same age who was getting seriously into drugs and getting seriously into really bad things. If there were an easy way for parents to use lie detection in that context, that might be okay, as compared to me trying to catch my daughter out in a little lie.

And of course, not all 17-year-olds are the same. And this 2-year-old is truly terrifying. Very precocious. Okay.

Thank you so much. I have two questions: one of them is about the thought crimes and one is about the self-censoring effect. About the thought crimes: I’m pressing Hank’s point that we read minds all the time. Even more, we read texts, and people’s writings are a good proxy for what people think.

And still, we don’t really heavily govern what people write, except in some extreme cases, and partly because it has some public effect to it. But people’s journals or people’s, I dunno, messages and so on. So the question is, and I know that neurotechnologies might have very direct access to people’s thoughts: do you think the concern will grow bigger, that we will regulate thought, because we will have this direct access? And is that the main difference between people’s writings and direct access to people’s thoughts, or is it something else?

And about the self-censoring effect: that was a real concern that I really related to. But when I was thinking about it more, I realized that if I think about something that I don’t want to think about, the last thing that would happen is me not thinking about it.

I will think about it constantly. The one thing that it’s impossible to take from us is our ability to think, and our ability to think about what we can think. So do you think the concern is that we will have fewer thoughts in the world, or that we will feel pressure not to think about them, or that we will feel that we can’t think about them, even if it doesn’t really matter whether we would think about them or not? Thanks.

Those are great questions. Let me start with the first one, on thought crimes. I like to answer questions by not directly answering them sometimes, right?

That’s the philosopher in me, or the law professor in me, or whatever it is. So imagine this: the Implicit Association Test has been used a lot to try to reveal to people their own biases that they may not be aware of. And the way it’s supposed to work is you show people, for example, a series of images associated with different phrases.
So, for example, you might show more dark faces associated with negative words and just try to figure out, implicitly, automatically, without people consciously processing it, whether they are assigning more negative words or negative emotions or negative feelings to the different ethnic or racial backgrounds that they’re encountering.
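An aside, not from the talk: the test described here is usually scored from response latencies, on the idea that people respond faster when the pairing on screen matches their implicit association. A minimal sketch of the standard “D-score” computation, with made-up reaction times:

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Greenwald-style D-score: latency difference between the
    'incompatible' and 'compatible' pairing blocks, divided by the
    pooled standard deviation of all trials. Positive values mean
    slower responses when the pairing conflicts with the implicit
    association."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical reaction times (ms) for one participant
compatible = [620, 650, 600, 640, 610]    # pairing matches the association
incompatible = [780, 820, 760, 800, 790]  # pairing conflicts with it
print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```

With these invented numbers the score comes out around 1.85; real IAT scoring adds error penalties and block-level details, so this is only the core idea.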

And then that’s supposed to reveal to the person implicit biases that they may not be aware of, because their explicit biases aren’t the same. And now put neurotechnology in. One of the things that was interesting about that Alex Huth paper that I talked about earlier was that when I was talking to Alex, he said, it’s unclear the extent to which it’s really mind reading.

Because he was one of the subjects in his own study. And he said it could decode the stories that I was listening to or imagining, but not what I thought about those stories that I was imagining. So not the metacognitive layer, just the explicit. So now imagine you have implicit biases, and explicitly you have committed yourself to being in opposition to whatever those biases are.

You’ve recognized it in yourself. You’ve committed yourself differently. What is the neurotechnology decoding? Is it decoding your implicit bias? Is it decoding the metacognitive layer when it’s being used? Your writings are whatever you have explicitly chosen to write. What’s in your mind is a lot more than just your explicit commitments.

It’s your implicit thoughts and desires, including ones that you’re unaware of yourself. And neurotechnology can help decode a lot of that. I think that’s different than writings. Your implicit associations with different images: these are all kinds of things that can be decoded with neurotechnology.

So it’s not just the direct access; it’s what it can get at that’s different. Your second question was about the self-censoring effect. In the book, I go through a lot of the literature on this, which shows the chilling effect of self-censoring. And you’re right, in part: in the moment, if you think, I am not going to think about that, I am not going to think about that, you might think about that.

But over time there is a dulling and an avoidance that happens, which narrows people’s broader set of thoughts. The fear actually does quell both people’s thinking and what they gravitate to, what they learn about, what they read about, and the avoidance more generally. So I do worry that the chilling effect over time narrows the scope of things that people interrogate, investigate, and think about, because they know that those kinds of thoughts are being decoded or accessible over time.

There’s also the flip-side possibility, which is that we normalize it, it becomes invisible. And that’s part of what I get into in the last chapter: over time, the risks become increasingly invisible to us. And so the freedom with which we might use the technologies with abandon will make all of those potential thoughts accessible for surveillance and for other kinds of normalization that could put us at other kinds of risk.

So it could go either way, right? It could be a chilling effect; I think it depends in part on the kind of regime that you live within. There’s also the possibility that we simply allow it, we ignore it, and there are risks that become invisible and steer our behaviors over time in ways that we’re blind to.

Hi.

So forgive my ignorance, because I discovered this event today; if this is covered in your work, I apologize. But it really connected with me, ’cause I have an interest in data privacy. So I’ll have a three-part question. You don’t have to answer all of them if you don’t want to, ’cause I wanna be sensitive to the time.

But my first one comes from gamification. eSports gets the common media coverage, but the introduction of video games, and the increasing rate at which we can build environments to test and monitor people while they’re performing or experimenting in these environments, I feel is a strong avenue for collecting a lot of this data.

Amongst other more common devices such as phones and smart devices. I was curious if you are hopeful about this or if you’re more pessimistic about this avenue. The second was regarding concerns you’ve discussed already, which is exposure by association or by proxy.

So with 23andMe, someone’s, a cousin’s, information can expose me; or Neuralink, if you’re thinking along similar brain lines. And following that, what steps should someone who is either knowledgeable about or ignorant of these happenings take to address this? Either someone who has read your work or someone who is just hearing about it for the first time: what message might you have for them?

And then finally, the timing of Severance, if you have any interesting takes on the show, as I feel some of it hits on this. Thank you.

That’s four, not three. I was keeping them in my mind. All right. So on the first one, which is gamification: it is unclear to me how much VR is really gonna take off, and whether the metaverse is gonna take off, or XR is gonna take off.

We’ll see what happens with the cheaper version of the Apple Vision Pro that comes out, and the extent to which these layers actually take hold. How much does AR take off? Google Glass failed spectacularly a long time ago; are we finally gonna see a resurgence of that? And if we do, I think it’s that combination of all of the sensors that ends up being incredibly powerful as a place to gather all of that data and rich contextual-layer data.

Am I optimistic about it? I don’t know what optimism means in this instance. Am I optimistic about data privacy in that context? No. I don’t know what I’d be optimistic about in that scenario, so I’m just gonna leave it at that. The second question was, remind me, exposure?

Exposure, yes. Awareness of it, it seems unavoidable to cover all bases, but to some extent, for someone who is knowledgeable or new to this. Yeah. So what might they do? I do go into a number of potential options in the book, but I’ll say that the first book really focused on a broad global framework around the right to cognitive liberty and advocating for the right to cognitive liberty.

The second book that I’m writing really gets into this in more particular detail, because it’s the question I got most frequently, which is: I’m bought in, right? But I’m not a human rights scholar; I’m not gonna be out there advocating at the human rights level. What can I do individually? And I have a lot of ways in which I think individuals can reclaim their own cognitive liberty, and in which we can individually and collectively work together to try to advocate for a right to cognitive liberty and embed it more deeply.

We can talk about that during and after wine. Severance, the timing of it: do you mean the timing as in how soon we’re gonna reach that new reality of being able to have that, or having it imposed on us by employers? In terms of direct media addressing of the sort of material that’s being discussed, the first episode of the newest season of Black Mirror really directly addresses this.

There is a lot of media coming out right now that is starting to address this. And I will say, the number of mainstream writers from all of the major outlets sitting in my inbox right now saying “I’m writing a story about this space” suggests there’s about to be a huge takeoff in general-media coverage of these issues. So I think they’re onto it. And they’re onto it because so many of the major tech companies are launching, or have recently launched, major products in this space. So now, I think, is when we’re gonna start seeing it. I have a very short answer to your second question, about what people who don’t know anything about this should do.

They should read her book, The Battle for Your Brain. He said he was already familiar with it, so I was trying not to do that, but thank you. Okay, they should buy and read her book. Yeah. Okay. Thank you. So this time we have come to the end, and fortunately simultaneously with coming to the end of people standing at the microphones, which was perfectly planned.

So I ask you please to join me in thanking Nita and Hank for a fascinating conversation

and our moderator. The next Constitutional Law Center event is our annual big conference, which will take place not this weekend but next weekend, on Friday and Saturday, beginning at 10:30 AM in Paul Brest Hall. Our topic is Pierce v. Society of Sisters at 100. This is the 100th anniversary of that extremely interesting Supreme Court decision.

Pierce v. Society of Sisters was about an Oregon law, adopted by referendum, that forbade any parents from sending their children to private schools. So education was compulsory for every child, but you had to go to the public schools to get it.

And the Supreme Court unanimously held that that was a violation of the US Constitution. And this is so interesting for a variety of reasons. One is, how did they get there? What clause of the Constitution was this about? Was this a freedom-of-religion claim? Actually, there were two schools involved.

One was operated by a religious order; the other school was a military academy. Does this go beyond religious freedom? Is it some kind of substantive due process? Pierce was cited in Roe v. Wade and other cases about substantive due process, and it has given rise to all of our modern doctrines about parental rights, but also about the right to control our lives in matters of intense personal concern.

These issues are as controversial today, especially after the Dobbs decision, as they ever have been. And then there are the questions about education. Would it be a good idea if all children had to attend the same set of public schools? What would that do? Would that increase toleration and understanding, or would that, as the Supreme Court put it, homogenize our children?

So this is a case that is extremely rich from the point of view of constitutional doctrine, from the point of view of freedom of religion, and from the point of view of educational policy as well. And we have an all-star cast of speakers from all over the country addressing these various dimensions of the decision.

And I strongly encourage you, after you have read, or maybe even before you’ve read, Homage to Catalonia, to come to this conference. And again, thank you to our speakers.