Tower and PACIFICA – CodeX Group Meeting 11.20.2025
CodeX Group meeting featuring two presentations:
Tower, an AI-powered data room platform for M&A lawyers that automates document review and project management in due diligence processes. Tower helps law firms organize and analyze thousands of legal documents through automated extraction and reasoning capabilities.
PACIFICA (the Platform for Immediate and Final Self-Resolution of Administrative Conflicts), a Brazilian government platform that uses AI to accelerate maternity benefit eligibility checks for vulnerable populations. PACIFICA combines AI-powered information extraction with rule-based legal logic to resolve benefit disputes in 38 days versus 2-4 years in courts, achieving an 86% settlement rate.
Both projects demonstrate practical applications of AI in legal workflows — Tower as a commercial venture serving corporate law firms, and PACIFICA as a public service addressing access to justice issues in Brazil’s overburdened legal system.

Watch Tower and PACIFICA presentation.
Transcript of talk
Roland Vogl: It is my great pleasure to introduce our speakers today. We have Andy Zhang, who’s the co-founder and CTO of Tower, which is a new agentic data room. And he’ll be sharing a little bit about Tower. And then we have Fernanda Suriani and her colleagues from the Brazilian government who will introduce us to the PACIFICA project, which is an AI-assisted platform that accelerates eligibility checks for maternity benefits. So two quite different projects that we’re going to learn about. We’re excited to have you.
Before we start, I have Marshall Silva here today, who is a new member of the group. Marshall, you want to introduce yourself quickly?
Marshall: Thank you. Yeah. I’m building what we would call a triage law firm to help accident victims in the US find the right law firm that will actually take good care of their case, and hopefully better care than the billboard lawyers that many of them end up working with. So I’d actually be very interested in meeting Bruce or other people who are interested in access-to-justice-related challenges. I think it ties very closely with what we’re doing.
Roland: Yeah, Marshall, maybe at one of our next meetings you can come in and talk a lot about your platform. It seems like a really cool project you’re working on.
Marshall: Thank you.
Roland: But good. Well, with that, I think I’ll just turn it over to Andy. We give each group 30 minutes, but if you could leave about half of your time for Q&A, that would be great.
Tower – Andy Zhang

Andy: Great meeting everyone. Love how everyone’s working on very altruistic things. We’re building a data room for M&A lawyers, and the idea is that our data room helps manage your request list, and we also automate a huge chunk of due diligence. We went through Y Combinator last year, and now we work with 30 law firms in the States, seven law firms in Canada, and $1-billion-plus tech acquirers. These are companies that buy other companies and hold them.
I think for the purposes of today, I’m just going to go through what the workflows currently look like and then go through a quick demo of our product before leaving it to Q&A. But if folks have questions during the presentation, please don’t hesitate to interrupt.
Current Buy-Side Workflow
So currently on the buy-side workflow, this is what the process looks like. And for folks who aren’t familiar with corporate lingo, buy-side is basically when a company buys another company. The company that pays is the buy side. So when investors invest in a company or when an M&A company buys another company, they’re considered buy-side.
When the deal kicks off, when the due diligence kicks off, you typically start with a request list. And a request list looks something like this. It’s typically a Word or Excel document with a list of items the buyer is asking the seller for. So the buyer would want to see financial records, employment agreements, customer contracts, and the like.
Typically, this process starts with the buyer sending the request list to the seller, and the seller will go back and forth between the two over email and dump a huge chunk of documents. Typical deal documents range from thousands of documents per deal to hundreds of thousands of documents per deal.
And when the buyer gets those documents, they have to loop in their associates from different parts of the firm, such as HR associates or environmental associates, and they have to review these documents. And the review process can be very lengthy. But at the end of the review process, they still have to go back to the request list, figure out what things are still outstanding, add in the follow-up requests, and then repeat the same process over and over again.
The two main takeaways here are that manual review is a huge time sink, and that a lot of deals are actually unprofitable for a lot of law firms because of the manual review. This is especially the case with smaller deals. A lot of venture deals, for instance, typically go over budget because of this, and the firms can take a loss.
And on top of that, a lot of folks tend to focus on the document review part of the workflow, but the project management is equally as painful. We spoke with hundreds of lawyers, and there’s some really creative ways that folks keep track of things, but it’s still highly manual and it’s just an unpleasant process overall.
So the two main pain points in this due diligence workflow are the doc review and, less talked about but still very severe, the project management. And this is where Tower comes in.
Tower’s Solution
So in the Tower workflow, everything is done through the Tower platform, including responding to the request list where possible. And we also have AI tools that both help organize and review bulk documents within the data room. So Tower takes the brunt of the project management, and we can review hundreds or thousands of documents at the same time.
So let me quickly show what this looks like.
Product Demo
So if folks are in the corporate world, they’re probably very familiar with this view. This is a typical data room view. You have your different folders, and within each folder you have files. But before we dive into any of the AI features, I want to quickly talk about the project management piece, which is the request list.
So currently you can create a request list through the platform by uploading your own request list, which I’ve done here already; we parse out your request list with AI. Or, since a lot of firms have templates, if there are typical templates that folks like to use to get the V1 request list up, we can take those within the platform.
And within this request list, you can invite your collaborators or your respondent, and you can assign requests to different people. So this is a much better way to keep track of the requests going back and forth than Excel or Word docs.
And there’s also a history of the request list. Typically in a lot of deals, you’d have to version the request list. Are there folks here from the corporate world, by any chance, or formerly? Okay, I see Bruce’s hand up. So you’re probably very familiar with this, or your associates are. It’s a huge pain because it’s a lot of manual back-and-forth email threads, and it’s hard to keep track of everything. So we just have everything built within this platform.
And on top of that, whenever you create a request list within our platform, we’ll create folders corresponding to that request list, and we’ll organize the data room according to those folders. So if you upload documents, or more likely a dump of documents, we will actually go through and read the documents and organize them according to the request list. This saves a lot of admin time up front in these deals.
So let’s take a look at what these documents actually look like in our platform. So also for organization, we have reasoning. So you can double-check that everything is kosher before you just accept it. And within the documents themselves, we rename everything based on a precedent. And we also give a quick summary.
A lot of folks, because we work with such large firms, they typically have international operations, and a lot of the folks find it really helpful for non-English documents. So a lot of folks will be able to go through Spanish documents or Hebrew documents and use our AI to basically summarize and rename back to English. So that’s the gist of the main data room.
And let me show you the review feature. You know, we’re getting some background noise from someone. If you could put yourself on mute, that would be great.
Yeah, thank you, Roland.
So for the review features, you create a review table. You can select as many documents as you want. So for these, I’m just going to select the six customer contracts. And here are the terms. This is based on all of our iterations. This is the latest version. So people can predefine terms. These are all pre-baked, or you can add a custom term.
So, Bruce, what would be a term that you would ask your associates to look for in these deals for customer contracts?
Bruce: If you sell customer contracts, let’s say expiration dates, expiration, or license, license conditions.
Andy: Okay. So I think license conditions is a term that we don’t have baked. So you can add a custom term and then generate a prompt to ask the AI. So this way, lawyers don’t have to be prompt engineers. They can still get a very thorough prompt without having to go through the effort.
Bruce: Licensee shall not, and then there’ll be lots of terms that they can’t use, whatever the IP is.
Andy: Exactly. So it’s a license restriction, essentially. Perfect. So once everything is selected—oh, by the way, we also have some workflows, so you can save a lot of these terms in just one click. And this is really helpful for the junior associates who have to do the bulk of this work. And I can start extracting.
This is just going through to analyze every single document, extract the terms, and also reason over them. So that’s the difference between a tool like this versus a more traditional machine learning tool that folks might be familiar with, that’s based on pattern matching.
Audience Member: So what does reasoning mean in this connection?
Andy: Yeah. So reasoning means that the model’s trying to understand the context of the extracted passages, not just the pattern match. Are you folks familiar with tools like Kira?
Yeah.
Bruce: Yeah, I haven’t used it, but I know about Kira.
Andy: Yeah. Yeah. So Kira is probably one of the most popular doc review tools right now. And what they do is pattern match. So they take your phrase and they try to find the passages within the document. We go one step above, and now we try to understand the context of the passages before we surface it. So we’ll catch a lot more stuff that just general pattern matching won’t catch.
For instance, change of control. When you look for change of control clauses, there are some assignment clauses that are technically change of control. If you just pattern match change of control, you may miss those assignment clauses, but we’ll be able to catch it because we understand the context of the entire document. Does that make sense?
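To make Andy’s contrast between literal pattern matching and contextual reasoning concrete, here is a minimal sketch (an editor’s illustration, not Tower’s code; the helper name and clause text are hypothetical):

```python
import re

# Naive pattern matching, in the style of older doc-review tools:
# it only fires when the literal phrase appears in the text.
def pattern_match_change_of_control(text: str) -> bool:
    return re.search(r"change\s+of\s+control", text, re.IGNORECASE) is not None

# An assignment clause that is effectively a change-of-control
# restriction, but never uses the phrase "change of control".
assignment_clause = (
    "Neither party may assign this Agreement, by merger, "
    "acquisition, or otherwise, without prior written consent."
)

# The literal search finds an explicit clause but misses the implicit one.
explicit_clause = "Upon a Change of Control, the Licensor may terminate."
assert pattern_match_change_of_control(explicit_clause) is True
assert pattern_match_change_of_control(assignment_clause) is False

# A context-aware check would instead ask an LLM whether the clause has
# the *effect* of a change-of-control restriction, e.g. (hypothetical call):
#   answer = llm("Does this clause restrict transfers of control, even "
#                "implicitly? Answer yes/no with reasoning.", assignment_clause)
```

The assignment clause above is exactly the kind of passage a phrase-level match skips but a context-aware model can flag.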
Yeah.
Bruce: Since you’ve called on me in the front row, I thought I should throw you back a question. Assume that you found the license restrictions and you’re buying a company, and you’re trying to figure out how much of its revenues are coming from using licensed software or a licensed product of some kind and might be at risk. How would you interpret that data, the amount of those revenues tied to the way they are using licensed IP?
Andy: Right. So we actually have single-document AI as well. Let me just pull up, let’s see, an IP agreement that might have some license stuff. So you can ask it questions like you would ask ChatGPT, but the difference here is that everything gets sourced back to the contract. So the question was: are there any license restrictions in this contract, and if so, what are they?
So now our agent will be able to read through this document and come up with an answer. This is helpful for single document review. So when you have a specific question for one document, this is a quick and dirty way to get the answer.
Okay, so it says there’s a field-of-use restriction in 5.7. And by the way, everything is sourced back to the contract, which we consider the source of truth, so it won’t hallucinate anything. Just quickly reading through this: it doesn’t seem like revenue is mentioned in this contract, but it does mention several restrictions and some carve-outs, it looks like. So this is just a quick and dirty way to review one document.
And going back to the table, I think it should be done now. This is what the end table looks like. We have the answer on top. Again, this is the reasoning piece: it gives you the answer directly instead of just pattern matching. And there’s also the reasoning itself. We added this because we got a lot of feedback from big law partners; they want their associates to learn. So this is just a way to teach their associates why the AI returned this answer.
So the idea is that typically associates have to go through this manually. This just gives them more leverage so they can, you know, hammer through hundreds of these terms at once. And we also have, of course, the citations, which are the most important part. And we also built in project management features for folks to review these documents quickly and double-check everything.
And this whole flow is designed to be AI-driven with human review. So humans would still go through and review everything. That’s something else that is very important for big law firms especially: the fiduciary duty of reviewing everything before passing it off to their clients.
Audience Member: In this kind of contextual search that you showed before, how do you verify that your algorithms are retrieving all the relevant clauses? And how did you train that model?
Andy: Yeah. So we have an agentic workflow behind the scenes, so it’s not fine-tuning any models or using proprietary models. It’s more about chaining a series of models in a fairly complex workflow. Behind the scenes, we actually expand before we extract, and then we compress. So, forgive the technical jargon, it’s a lot of map-reducing.
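As a rough sketch of the expand-extract-compress shape Andy describes (a hedged illustration only, not Tower’s implementation; in the real system the extract and compress stages would be LLM calls, stubbed here with toy functions):

```python
# Map-reduce style extraction pipeline: each document is split ("expand"),
# each chunk is analyzed independently ("map" / extract), and the per-chunk
# findings are merged into one answer ("reduce" / compress).

def expand(document: str, chunk_size: int = 400) -> list[str]:
    """Split a document into overlapping chunks for independent analysis."""
    step = chunk_size // 2
    return [document[i:i + chunk_size] for i in range(0, len(document), step)]

def extract(chunk: str, term: str) -> list[str]:
    """Map step: find candidate passages for a term in one chunk.
    A real system would use a model call; here, a toy substring filter."""
    return [chunk] if term.lower() in chunk.lower() else []

def compress(findings: list[str], term: str) -> str:
    """Reduce step: merge per-chunk findings into one summarized answer.
    A real system would use another model call that reasons over findings."""
    if not findings:
        return f"No passages found for '{term}'."
    return f"{len(findings)} candidate passage(s) found for '{term}'."

def review(document: str, term: str) -> str:
    findings = [f for chunk in expand(document) for f in extract(chunk, term)]
    return compress(findings, term)
```

The overlapping chunks in `expand` are one common way to avoid losing clauses that straddle a chunk boundary; the compress stage is where the "reasoning" summary over all hits would live.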
And the way we’ve done the accuracy is just brute force. My co-founder is a corporate lawyer; he’s practiced for half a decade. So he’s gone through all of this work, and he’s probably reviewed thousands of documents for us in building this tool.
Tower Agent Demo
I see we still have a bit of time left, so I’ll show the Tower agent. What this does is give direct deliverables from this table. So a lot of times, we’ve noticed associates would dump hundreds of documents in here. They would review them. But the issue was it’s hard to synthesize findings over hundreds of documents. You can actually ask our AI to synthesize the results.
So let’s take a look and run it over these documents.
And what this does is read through this entire table and summarize the results. The way we’ve seen folks use our tool, especially the multi-doc search, is to verify the work of juniors. So a senior associate or partner would reanalyze the documents that a junior analyzed, to double-check that everything’s kosher. We’ve also seen people use this for documents in different languages. We get a lot of overseas deals, and people are able to analyze documents in different languages and get the results in English.
Q&A Session
Bruce Cahan: I don’t want to crowd out anybody else asking questions, but as the corporate lawyer that I was and still am, I guess, I wonder how you can concatenate the results that you get with this document review to recommend a holdback in, let’s say, the purchase price for the company based on the risks that the document review reveals.
So, you know, in my hypothetical, there was some issues as to whether some parts of the revenue would be at risk if the licensor chose to be aggressive. How would the platform give you some range of holdback on price or estimate the dollar amount of the risk based on looking at all the due diligence, or is that solely left to the corporate law firm to negotiate?
Andy: Yeah, that’s an amazing question, Bruce. To summarize, it’s a little bit of both. So if everything’s within one document, you can actually extract the different terms here and ask the AI to analyze it. But if it’s multi-document, that’s not something that we currently support. So if you have one document with multiple amendments, you’d have to combine them into one document before you upload it to our platform. Or as you mentioned, the lawyers would have to figure that part out themselves.
The idea for the multi-doc extraction is to basically find the terms that a lawyer would use to conduct their analysis, but the lawyer would still handle the strategic—like they would determine, “Hey, is this risky or is this material,” etc. So we don’t make the judgment for them. We just provide them with the tools to make the judgment.
Roger: Hey, it’s Roger. Well done. I’m a former corporate lawyer who still has papercuts after 20 years of doing this mind-numbing work. How do you deal with the process of confirming missing pages, missing exhibits, missing signatures? Do you do a search to make sure you’re getting the right documents and that there aren’t any imperfections in them?
Andy: That’s a fantastic question, Roger. It’s actually one of the biggest asks from our customers, and we don’t have it in this build, but in our next build, whenever you upload a document, we automatically check for expiration, missing pages, and missing signatures. So it’ll come up. Right now we just flag duplicates, but in the future, we’ll also have other issues here, and people will be able to select them. We’re piloting that with a couple of customers right now, but it’s not in the main build.
Roger: Okay. Just remember to add missing exhibits as well.
Andy: Yeah, I think it’s an interesting problem to solve, actually, because it’s not just semantics. It’s not like there’s a pattern you can match; it actually requires understanding, which is why it’s interesting to solve with LLMs versus traditional pattern matching.
Roger: Yes. And it’s so critical. You know, as a lawyer, the first thing you have to do is make sure you actually have every document in your closing binder. The lawyers deliver you a single PDF with 500 pages in it. And if there’s been an amendment in six months, you go back to the original and it’s not updated. And so lawyers, and I was neurotic about this, ask: do I really have the final version? Oh my God, there’s no signature on this page. Where is it? Right? So it’s really a huge problem. You can’t give good advice with bad documents.
Andy: Yeah, exactly. Yeah. I think you’ve nailed the issue on the head. I think going back to this, a lot of folks focus on the review documents part, but equally painful is getting everything set up. And I think I’m sure the real corporate lawyers here have probably experienced the pain. It’s not as sexy of a problem as the doc review, but it’s still a problem. And that’s why we’ve built 50% of our product for the project management and 50% for the doc review.
Roger: Hey, I’m helping clients upstream of the mess that you’re dealing with. We have an enterprise capture system where we capture every single document that comes into the enterprise, tag it, organize it, validate it, file it into an internal VDR, and assign individual access rights across the enterprise so people only see what they need. So by the time one of my clients gets bought by one of your clients using the system, everything that goes into your platform will be validated, accurate, and completely actionable.
Andy: Yeah. That’s fantastic. Sounds like you’re already one step ahead of the game.
Roger: Well, it’s an interesting game.
Laverne: I wanted to follow up on how your co-founder sort of brute-forced the system and how you verified the outputs in this agentic process, and whether the system is then able to learn from initial human input in this process.
Andy: We don’t have training, and that’s very deliberate, because our customer base is enterprises: enterprise law firms, enterprise M&A shops. They’re very territorial about IP. So we’ve made a very explicit decision not to have a reinforcement loop within our product itself.
So what I mentioned about Noah, my legal co-founder, going through thousands of documents and hundreds of terms is that we use the data set that he’s built to test our product as we iterate. That’s the only, quote-unquote, “training” that we do. Other than that, we don’t use any reinforcement loops to improve the agent based on customer usage.
Roland: Can you say a few words about where you are as a company at this point? You have a working product, you have a bunch of customers already. What’s the journey going to look like going forward? Are you going to master the world of due diligence, or are there adjacent products that you foresee? What’s on your mind? Are you funded? Are you already generating revenue?
Andy: Yeah, we’re generating revenue. We went through Y Combinator, and we’ve raised some money from eager customers. So a lot of our customers have written angel checks; that’s the kind of fundraising we’ve done. I think the goal is to fundraise probably in December and start scaling.
Yeah, it’s been pretty tough because we’re a small team to both handle the customer success globally and also ship fast enough. So today, for instance, I had European calls, and during the day I’ve been with our North American customers, and in the evenings I’ll be dealing with the Aussies. So it’s tough for me to find time to actually ship product. So it’d be nice for me to offload that to some talented engineers, hopefully, so I can focus on the customer piece.
Roland: Gotcha, gotcha. Okay. Well, good. There was one other question from Scott about whether you can do it across different languages too, and one question about whether you’re using multiple models to arrive at the final analysis.

Andy: Yeah. So multiple languages are supported. I’m not sure if I have any multi-language contracts in here. If folks have a multi-language contract, I can actually upload it to the platform right now and show how that works. But if not, we do handle multiple languages, and we do use a variety of different models for different things.
Roland: Okay. Your business model is sort of subscription, or is it based on the amount of documents uploaded, or how does it work?
Andy: Yeah, it’s a little bit of everything. Because the customers we deal with are so large, they typically want custom pricing. Just for some context, typical VDR pricing is per deal with a gigabyte limit, and that’s the model we try to follow. But some of the smaller customers, or customers in pilot, prefer to have a per-seat model, so we have that as well. But typically there’s a limit on the number of documents they can upload, and then it’s either a per-person or per-deal model.
Roland: Got it. Okay. I’m not sure you answered the question on the multi-models, though: do you use multiple models for the final analysis?
Andy: Yeah, multiple models. That’s actually one of the toughest parts about building this product from an engineering perspective: when people dump thousands of documents, it’s very hard to find enough compute to actually analyze all of them. We use Claude, GPT, Gemini, all the usual suspects, because certain models are good at certain things. So we both spread our compute over different models and also improve our performance by using different models for different things.
Roland: Okay. Well, great. So, well, thank you so much, Andy. And I think you shared your email already. Maybe you can put it up again or put it in the chat.
Andy: Thank you. Yeah. Thank you so much. Thank you for sharing.
Roland: This is great. A round of applause to you, and all the best. So you’re located in Toronto now. Where were you before?
Andy: Yeah, we moved back to Toronto because it was just kind of a pain flying between New York and Toronto for myself. So we figured we’d sacrifice the weather. We’ll deal with the winters, but we’ll be closer to our customers.
Roland: Well, okay. Well, good luck with the winter. And then please keep us posted.
Andy: Yeah. Thank you. Just a quick plug: if folks know any innovation people, or are innovation people, at a big law firm, I’d love to chat with you.
Roland: Yeah, maybe some in this group. So please connect with Andy if you need an agentic data room. And thank you again, Andy; appreciate you sharing the story of Tower with us. We’ll turn it over to Fernanda, Ricardo, and Andrea now to learn a little bit about the PACIFICA project.
PACIFICA Presentation – Fernanda Suriani

Fernanda: Okay. So good afternoon, everybody. Thank you for inviting us to be at this CodeX meeting today. I think our talk will be slightly different from the ones you’re used to in these sessions. We’re not going to talk about business models, profit, or revenue, since we’re government representatives; we have different success metrics. We are seeking citizen smiles, granted rights, and social impact.
So let me give you a little bit of context. Okay?
Context About Brazil
So let me give you a little bit of context about Brazil. We are one of the ten largest economies in the world in terms of GDP, but we still have profound social inequalities. We have, for instance, 4 million people living below the poverty line. Three in ten Brazilians have functional illiteracy: they don’t know how to do simple math or interpret a text. And millions live without piped water in their homes.
So there are millions of vulnerable citizens who don’t know their rights, and even if they know, they face several barriers to accessing them, to claiming them, which means that these inequalities are also reflected in our own justice system.
We have more than 85 million cases going on in our courts nowadays, which take 2 to 4 years to be resolved; we have the largest backlog in the world. And if we look at the research by the World Justice Project, we found that of the 6 to 9% of Brazilians who perceived a civil legal issue in the last two years, only 1% of them had access to justice.
But why is that? If we look at the taxonomy of the parties in our legal cases, we see that 25% are repeat players, and the biggest one is the Social Security Agency.
Now imagine a young mother in Brazil with her newborn baby. She asks for her maternity benefit, and it is denied by the Social Security Agency. What are her options? Navigate the judicial system? It’s going to be overwhelming and almost impossible. And in essence, even if she starts a lawsuit today, by the time she gets her benefit, her newborn will already be talking and running around.
PACIFICA Solution
And that’s why the Attorney General’s Office launched PACIFICA, which is an online dispute resolution platform that aims to deliver access to justice in a more efficient fashion. We are also collecting data to improve our public policies: we want to understand the root causes of the conflict and prevent it proactively.
The platform is automated end to end, from the request until the enforcement of the agreement, with automation wherever it makes legal sense. And we’re trying to make it a user-friendly platform so that citizens can help themselves to navigate it and resolve their conflicts.
It’s important to note that we are launching the pilot project with the Brazilian Public Defender’s Office, but the goal of the platform is, in the future, to resolve any kind of conflict between the administration and the citizens.
How PACIFICA Works
So let me explain a little bit how it works. That mother who has her benefit denied goes to her mobile phone and gives some basic information, without the need to repeat details or upload again documents that were already given to the agency. And we produced a short video, because our public has low levels of education, so we have to have videos and tutorials to show them how to use the platform and to empower them to use it. I’m going to show it; it’s very quick.
[Video plays explaining PACIFICA platform]
Video: “Did you know there’s now a quick, easy, and secure way to request a new analysis of a denied benefit? PACIFICA, a platform for immediate and final self-resolution of administrative disputes, is a digital service created by the Attorney General’s Office. It is a fast and secure way for anyone who’s had a request denied by a government agency to negotiate directly with the administration to fulfill their previously submitted request.
This approach avoids the need for judicial proceedings, allowing for a simple and fast resolution. Initially, the platform will handle dispute resolutions involving Social Security issues with the INSS, National Institute of Social Security. Later, the services will be expanded to other agencies in other areas of the federal administration.
See how simple and easy it is? First, go to the PACIFICA site, then log in to the gov.br system with your CPF and password. Click “request here.” Now just fill in the requested information: the state where you live, the child’s CPF, your CPF, the number of the benefit that was denied, and the date of the request. And you can find the benefit number in the denial letter or by accessing My INSS at the INSS, the National Institute of Social Security.
Then you need to click on the terms of the agreement, scroll down to read them, and click the button to agree to the terms. You can now track your request through PACIFICA. The review period is 30 days. If an agreement is reached, it is sent by the attorneys to the INSS, and you will receive payment information within 45 days after this period. If the matter cannot be resolved through PACIFICA, the result will appear in the follow-up, and you can contact the INSS and pursue other options.
Please note, at this time, only the Federal Public Defender’s office can submit requests through the platform, and it will only review maternity pay requests from special beneficiaries, i.e., women from rural areas and traditional communities such as rural workers, artisanal fishermen, indigenous women, and Quilombolas, descendants of Quilombolas.
PACIFICA was created to accelerate the recognition of rights, ensuring greater dignity for citizens and reducing legal proceedings.”
Technical Implementation
Fernanda: Okay. So let me explain how it works from the inside. When the request is made by the mother, an automated workflow starts. It searches the family’s information in government databases, and it tries to find impediments to the agreement.
If no impediments are found, then it starts searching for evidence. The evidence criteria were predetermined by legal criteria from the Attorney General’s Office. And the system generates the document that you can see over there, with the analysis, the sources used to assess each impediment or piece of evidence, and a little bit of reasoning about whether the impediment holds or not.
In this way, the system is auditable and, to a certain extent, explainable. It's also worth noting that the settlement terms were predetermined based on the expertise developed in the Attorney General's office's legal cases; the prompts embed all of that expertise in the system.
Once that document is generated, the automated workflow creates a task for the attorney, with tags showing whether there is an impediment or evidence. The attorney then reviews all of it and decides whether it is a case for agreement or not.
If we have an agreement, the system communicates with the Social Security Agency via its API, and the agency enforces it in an automated way as well. So the implementation, the enforcement of the agreement, is also automated.
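The pipeline Fernanda describes, impediment checks first, then evidence gathering, then a tagged task for a human attorney, could be sketched roughly as follows. All function names, record fields, and the specific impediment and evidence rules here are illustrative assumptions, not PACIFICA's real criteria.

```python
# A minimal, hypothetical sketch of PACIFICA's automated workflow as
# described above. Field names and rules are illustrative only.

def search_impediments(records: dict) -> list:
    """Checks that would block an agreement (illustrative examples)."""
    impediments = []
    if records.get("formal_employment"):
        impediments.append("formal employment found in the benefit period")
    if records.get("prior_benefit_granted"):
        impediments.append("benefit already granted for this child")
    return impediments

def search_evidence(records: dict) -> list:
    """Evidence categories predetermined by legal criteria (illustrative)."""
    evidence = []
    if records.get("rural_union_membership"):
        evidence.append("rural union membership")
    if records.get("family_rural_land_registry"):
        evidence.append("family listed in rural land registry")
    return evidence

def run_workflow(records: dict) -> dict:
    """Automated pipeline: impediments first, then evidence, then a
    tagged review task. A human attorney always makes the final call."""
    analysis = {"impediments": search_impediments(records), "evidence": []}
    if not analysis["impediments"]:
        analysis["evidence"] = search_evidence(records)
    analysis["attorney_task"] = {
        "tags": (["impediment"] if analysis["impediments"] else [])
              + (["evidence"] if analysis["evidence"] else []),
        "requires_human_review": True,  # the attorney decides, not the system
    }
    return analysis

result = run_workflow({"rural_union_membership": True})
# No impediments, one piece of evidence, so the task is tagged "evidence"
```

On attorney approval, a real deployment would then call the Social Security Agency's API to enforce the agreement; that call is omitted here since its interface is not public.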
Results and Impact
And our first results are very reassuring, because the whole process takes an average of 38 days, from the request to the implementation of the benefit and the payment. Compared to the 2 to 4 years in the courts, that's very good. We also have an 86% settlement rate, which is more than double the courts' results.
In terms of potential impact: last year alone there were 172,000 cases in the courts that meet the criteria to be analyzed in PACIFICA. That means we have the potential to free the courts for more complex cases while, at the same time, giving citizens a faster, more efficient, and more accessible way to resolve their conflicts.
And this is actually that mother who had her benefit denied. Now, with PACIFICA, she can file her claim, submit documents, and receive her benefit from her home with a few clicks on her mobile phone, while she's still breastfeeding. And she actually did.
Sorry, this is happening, and I would like to share with you some testimonials from lawyers and other citizens.
Q&A Session
Roland: That's great. What a cool project. Do you have a sense of whether those benefits were initially denied due to mistakes that the government agencies made, or due to the fact that the available forms are so complex that these populations couldn't really use them effectively, and with PACIFICA you're helping make those forms more accessible?
Fernanda: Actually, it's a bit of both, because the agency has its own rules, very strict rules for analyzing those cases, especially for very vulnerable people. They don't have many documents, and yet they have to present a lot of documents, of evidence, to prove they are from rural areas, indigenous peoples, or Quilombolas, the descendants of African slaves. They have to prove it, and they don't have much to prove it with.
And mainly the problem is this: our reasoning is different from the agency's. What we look for is how a judge would decide that case, the judicial risk. And we know that judges are not so strict in analyzing the case; they take a more holistic view of the person, of the family. And that's what we do.
So instead of looking for positive documents, we try to find the absence of impediments. We have a somewhat different way of reasoning about the case, and that's why we have a different approach, and a better settlement rate.
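The inversion Fernanda describes, settling unless a disqualifying fact is found rather than demanding affirmative proof, can be contrasted in a tiny sketch; every field name here is hypothetical:

```python
# Hypothetical contrast between the two reasoning styles described above.

def agency_style(case: dict) -> bool:
    """Strict: grant only if positive documents affirmatively prove rural work."""
    return case.get("has_rural_work_documents", False)

def pacifica_style(case: dict,
                   impediments=("formal_employment", "urban_property")) -> bool:
    """Judicial-risk style: recommend settlement unless an impediment exists."""
    return not any(case.get(i, False) for i in impediments)

# A vulnerable claimant with no documents and no impediments:
case = {}
# agency_style denies (no positive proof); pacifica_style settles (no impediment)
```

The same claimant fails the first test and passes the second, which is why shifting the burden from proof to impediments raises the settlement rate for people with thin paper trails.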
Roland: So in a sense, you were studying the court cases in these maternity benefit disputes, and you looked at what arguments were successful in front of a judge, and you sort of trained PACIFICA to help people make those arguments, right? You're guiding them through a process based on your knowledge of successful cases, and then you invite the people who are using it to provide that information.
Yeah.
Yeah, you go ahead.
But in the interface you just showed, people just had to fill in the Social Security number and the state. They weren't really asked to make an argument, not even a guided argument. Your system just makes that independently on behalf of your users. Is that right?
Fernanda: Yes. What happens is that we have access to a lot of government databases, which include social and welfare information about these people. Since we have access to that, we can process all this data and understand their situation. We also have access to the entire database of the Social Security Agency, so whatever they have already shown the agency, we have access to; they don't have to replicate and restate what they have already done.
We simplified it as much as we could. We just ask for the information needed to individualize the case, because these moms usually have three or four kids. So: which kid are you talking about? Who are you? Where are you from? And then we can collect all the data we need to analyze the case from the government databases.
Okay. Cool. Let me see. Okay.
Roland: So Alexandra is saying: wonderful work. What is the most common reason for an initial benefit denial? I think you talked about that already a little, but maybe you have some more examples. And how much attorney involvement is there in reviewing the AI analysis of a claim before resolution? And has claim-resolution data helped improve initial claim processing?
Fernanda: Yes. Actually, we have full human oversight in this pilot: we are taking 100% of the cases and analyzing them. We have a dashboard to track the accuracy of the AI, and Ricardo is the one who developed the prompts, so he is constantly monitoring and improving it. And, sorry, what was the other question?
And so, and we have a very good—do you want to talk a little bit about this?
Ricardo: Yeah, sure. Thank you for the invitation. To answer this question, I need to explain a little of PACIFICA's architecture. There is no bias in the pipeline, because we use AI in a very different form: the AI doesn't have the power to decide anything for us. It's just helping us analyze the cases and the characteristics of the people who request the benefits.
We decided to use generative AI from the start, but we didn't want to simply throw the problem at an LLM. We all know that generative AI is probabilistic, that it can hallucinate, and that it's not good at strict rule enforcement. So our question in the design of the system was: how can we use probabilistic tools like LLMs to analyze legal documents without losing deterministic and auditable results?
The answer is this: we created a kind of hybrid model, a hybrid architecture, and we separated the workflow into two different engines. We have an observation engine, the LLM, and a reasoning engine that uses rule-based logic built on our expertise as attorneys in analyzing this kind of case.
The observation engine uses GPT not to make legal decisions, but only to extract the kinds of information relevant to analyzing the case. It reads natural-language documents, reports, statements, data about the worker, and transforms this messy human text into structured data, usually a JSON file. It identifies dates, evidence of rural work, and employment records. Just this. So the AI is a tool to extract information.
In a second moment, after the facts are extracted, the LLM leaves the process and we move to a second phase. In this phase, Brazilian social security law and our settlement rules are written as explicit, deterministic code, and the system applies these rigid rules to the extracted facts. The structure is very clear: we check whether rural activity is found, and based on all these if-clauses the system concludes whether the benefit should be granted or not. It helps the attorney working on the case to decide at the end, but we don't give the system the agency to decide. That belongs to the attorney at this point.
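The two-engine split Ricardo describes might look roughly like this sketch, with the GPT call stubbed out and a made-up JSON schema; none of these names or fields come from the real system.

```python
import json

def observation_engine(document_text: str) -> dict:
    """Observation engine: in production this would be a GPT call whose only
    job is to turn messy natural-language documents into structured JSON
    facts. Stubbed here; the schema is an assumption for illustration."""
    llm_output = json.dumps({
        "rural_work_periods": [{"start": "2019-01", "end": "2023-06"}],
        "employment_records": [],
        "child_birth_date": "2023-05-10",
    })
    return json.loads(llm_output)

def reasoning_engine(facts: dict) -> dict:
    """Reasoning engine: legal rules written as explicit, deterministic,
    auditable code. The LLM has left the process by this point."""
    has_rural_activity = len(facts["rural_work_periods"]) > 0
    has_disqualifying_employment = len(facts["employment_records"]) > 0
    return {
        "has_rural_activity": has_rural_activity,
        "recommend_settlement": has_rural_activity
                                and not has_disqualifying_employment,
        "decided_by": "attorney",  # the system recommends; a human decides
    }

facts = observation_engine("Statement: worked on the family farm since 2019...")
verdict = reasoning_engine(facts)
```

The key design property is the hard boundary: the probabilistic component can only populate the JSON, and every downstream conclusion is reproducible from that JSON by plain code, which is what makes the result deterministic and auditable.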
So we always have human review of the AI's output, and we don't observe any kind of bias, because we use the AI only to extract information from unstructured natural language in documents.
Roland: Cool. That's super interesting, and helpful for understanding that architecture. It's also a nice example of bringing rules-based AI together with gen AI. That's great; we're thinking a lot about that.