ElAbogado – CodeX Group Meeting – April 23, 2026
ElAbogado: AI-Powered Legal Lead Matching at Scale
ElAbogado is a Spain-based legal lead generation platform that connects people with the right lawyers across Spain, the U.S., Puerto Rico, Mexico, Chile, and Colombia. Co-founder Martí Manent and data scientist Veronica Sorin presented at Stanford Law School’s CodeX Group Meeting on how they’ve built what they describe as a state-of-the-art agentic legal lead management system.
- The problem they solve: Most people default to the nearest or most familiar lawyer rather than one who specializes in their specific legal issue — ElAbogado matches users to the right specialist, having helped 2.5+ million people find lawyers to date.
- Scale: The platform handles ~1,500 leads per day (8,000/week), fielding inquiries via chat, phone, and WhatsApp.
- 15 years of data as the foundation: Real legal case data trained their AI models on tone, question flow, case qualification, and jurisdiction matching.
- Agentic pipeline: AI agents now handle inbound/outbound calls, chat, and WhatsApp conversations — qualifying leads, extracting case data, routing to the right lawyer, and following up if a lawyer doesn’t respond.
- Results: ~70% of cases are fully automated; headcount for the human lawyer intake team has been cut by more than half in under a year.
- Guardrails: They deliberately stop short of 100% automation, keeping humans in the loop for complex cases (~30%) and running post-call quality checks.
- What’s next: Pushing automation toward 85–90%, expanding to new countries (now a one-month project vs. one year previously), and potentially entering English-speaking markets.

Watch CodeX Group Meeting with ElAbogado
Full Transcript
Roland Vogl: Welcome everyone to our CodeX group meeting. This is our first group meeting after our FutureLaw Conference. It’s been an intense week at Stanford. If you were here, I hope you enjoyed it as much as we did. We had a very productive week, with a lot of new ideas being shared and a lot of great people in town. If you weren’t able to participate this year, I hope you’ll make it next year. We will be pushing out videos of the main FutureLaw Conference in the next couple of days, and you will be able to watch them on our YouTube channel. It’s really great content, so be sure to check it out.
Roland Vogl: One of the great people who came and stayed for pretty much the entire week is Martí Manent, who’s been a friend for many years, a legal innovation powerhouse in Spain, and whose ElAbogado platform actually reaches beyond Spain to many other countries. It’s a legal lead generation platform he’s built, among other legal tech projects. Martí has kindly accepted our invitation to present here today, along with Veronica, a data scientist at ElAbogado. We’re thrilled to have you here, and excited to learn what you’ve been up to. With that, I will turn it over to Martí and Veronica.
Martí Manent: Roland, thank you very much for the invitation. It was a great pleasure last week to be with all of you at Stanford. It was an amazing week, and the videos will probably be very interesting for all of us.
Thank you for the invitation to explain ElAbogado, and what we have been doing the last two years, because probably after sharing this with a lot of people last week, I think that what we are doing is probably the state-of-the-art in agentic legal lead management.
For a first introduction: ElAbogado is a platform built on what we think is a compelling idea, because a lot of people cannot go directly to the right lawyer; what they do instead is go to the lawyer on the same street, or one close to a friend. Imagine if, when you had a health problem, you just went to the person closest to you. That is probably not what you do. If you have a health problem, you go to the doctor whose specialty correctly matches what you have.
But a lot of people all around the world cannot get to a lawyer whose practice matches the problem they have. Some years ago, we understood that if we could match each person with a lawyer who has the practice that person is looking for, that would be a nice step toward democratizing access to justice. That was our proposal, and we try to do it at scale, with technology. After some years, we have helped more than 2.5 million people find the right lawyer, and it is a pleasure for us to know that so many people have found the right lawyer to help with their problem.
To give an idea of how many leads we are managing per day: probably more than 1,500. We manage these leads in Spain, and also in the States for the Spanish-speaking community. We are the leading platform in Puerto Rico, and we are also in Mexico, Chile, and Colombia.
To give a little bit of context on our platform — what we do is, when we find a person that is looking for a lawyer, we want to talk with that person, because we need to understand what kind of problem he or she has, what kind of lawyer he or she needs, and if there is a real problem, or if it’s not a legal problem.
In the last 5 or 6 years, we have been doing that with more than 40 lawyers in our team. But this is not scalable, and at the end of 2024, when the LLMs started being robust, we decided to try to build a new kind of solution.
What we understood is that if we could build a solution that can help people 24 hours a day, 7 days a week, through an agentic system, that would be amazing, because we could go to more countries and help more people.
That is what we have built, with a conversational legal artificial intelligence model named CLAIM. One thing we are very proud of is that OpenAI and ElevenLabs, probably two of the leading labs working on LLMs and voice AI (ElevenLabs being a voice lab from Europe), have each recognized us with an award, and that, for us, was amazing.
To explain more fully what we are doing here, it’s a pleasure to have Veronica with me. She’s the data scientist who can explain what we have been building on this platform.
Veronica: I think Martí explained what it is that we do, and I will try to explain a little bit on how we do it. I think Martí already mentioned some of the numbers, so the only thing I want you to get in mind is that we had a really big challenge to try to manage and to respond to all those users that we have, who have a real legal problem and need a fast answer or help. We had to find a way to do it as fast and efficiently as we can.
We had to find a way to treat our users in the best way, so the problem is: which technology, and how can we use the technology we have right now to help users in the best way? To keep in mind: we get roughly one lead per minute, 8,000 or so leads per week, about 30,000 a month. We had to find a solution to this technical challenge: how can we respond to our users quickly, in the best way, and give them the best specialist for their legal problem?
What we realized is that it’s not just one technology that can do all. We had to merge many pieces to make all this work and give us a solution.
The first thing we have is actually the training data. We have about 15 years of real legal cases. We have our lawyers who manage these leads and talk with the people. We have the lead with all the information, the data that is useful to train any artificial intelligence. That’s the base.
Then we have to build, on top of that, with the best tools that we can find, depending on the problem. It could be commercial tools, or any open source tools, and so on. Recently, and we’ll talk a little bit more in a moment, it’s all about LLMs. What is our best LLM? The thing is that we need to test, tune, and find what is the best solution for each specific task that we need to resolve. There is no just one solution for all.
We realized — and this was some time ago, because actually we have been using artificial intelligence for years now — that sometimes we also need to build an in-house solution. We cannot just pick up some tool that someone else has built and plug it in. We actually had to tune that for our product. We have all that data as the training ground, and how we translate that and build a tool that helps us to solve each of the technical challenges that we have.
Here we mentioned Claim — I think Martí mentioned it — and we have also Leaks and SLAB that also give us the solution to other problems. The last part of all this is how we orchestrate all that. How we connect those tools — that could be just commercial, open source — with our tools, and how we use our data from all these 15 years to tune all that and put it all together.
As I said, we started with artificial intelligence some time ago; it’s not just with the big LLM movement.
We realized, as Martí said, that we have users calling us, one per minute or so. That means you have a big, long call queue and people who need an answer now: people who call, or people who just leave a message. We have all that, and we have our lawyers, who actually have to call them back and ask the user for all the data we need, so we can define what the problem is and which specialist is best for it.
At that point, we understood that we had to make that process efficient. We needed a way to decide which is the lead you call first. How do you make all the efficiencies, so that you can give a solution to all of the users that are contacting you? At that moment, we used machine learning, so we trained, with our leads, a model that we also had in-house, to make this process very efficient.
Then, by 2022 or 2023, we had the big wow with the LLMs, so that became public for everyone. At that point, we said, okay, how do we start adopting that technology?
One way, for example, is to automate and make things more efficient. For example, when we have to contact the lawyers, we send the information about the user’s case: the user needs a divorce lawyer in Barcelona, with such and such issues. Before, people just wrote all of that in an email. Why, when an LLM can actually help with that? One of the simple things is that we just ask it to produce a text summary of the case. It’s automatic. People do not have to write any email, and things go faster and more efficiently. So you start doing these kinds of things.
Then when the LLMs became even more intelligent, I think that’s when we made the jump — a really big step forward — because now we can actually help people 24/7. People sometimes have problems, and the problems appear at any time of the day, and we are there for them to help.
Martí Manent: Let me explain a little bit. In 2019, what we did with machine learning was orchestrate which lead we need to call right now. Imagine you have 100 or 200 leads that you need to call. The best decision was made by machine learning — that was TensorFlow from Google.
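The 2019 prioritization step described above can be sketched in miniature. This is not ElAbogado’s TensorFlow model; the features, weights, and names below are purely hypothetical, but they show the shape of the decision: score every waiting lead and call the highest-scoring one first.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    lead_id: str
    minutes_waiting: int
    channel: str          # "call", "chat", or "whatsapp"
    practice_known: bool  # has the user already named a practice area?

def priority_score(lead: Lead) -> float:
    """Toy stand-in for the trained model: higher score means call sooner."""
    score = float(lead.minutes_waiting)   # older leads rise in the queue
    if lead.channel == "call":
        score += 30.0                     # live callers are most urgent
    if lead.practice_known:
        score += 10.0                     # easier to qualify quickly
    return score

def call_order(leads: list[Lead]) -> list[str]:
    """Return lead IDs in the order they should be called."""
    return [l.lead_id for l in sorted(leads, key=priority_score, reverse=True)]
```

In the real system the scoring function is a trained model rather than hand-tuned weights, but the orchestration around it (score, sort, dispatch) looks much the same.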
The next step, in 2023, was that some of the information we send to the lawyer — who is our customer; the customer is the lawyer who pays us per lead — would be written by an LLM. Our lawyer calls the potential client and asks: where do you need a lawyer, in Barcelona, Madrid, Miami, wherever, and what kind of practice is this? In the end, it was an LLM that wrote the message the lawyer receives.
But here, we still had lawyers inside our company making these calls and writing part of the documents that we were sending to our customers.
The big jump, as Vero mentioned, is in 2025. Part of the job that our lawyers had been doing, we have transferred to agents: an agent that makes a call, an agent that answers on WhatsApp, an agent that answers a chat. An agent makes the decision about which practice that person is looking for. An agent makes the decision that, okay, there is a case here that can go on. And an agent decides whether, economically, this lead is viable.
Right now, we have probably half the lawyers we used to have in-house. In less than a year, we have cut the workforce that was doing the calls in half. For me, that is very relevant, because that job had been done by lawyers, by people who were trained, went to law school, have real knowledge, and make decisions. These decisions are now made by an agent. Better.
Veronica: Yeah, exactly. Just to give a little bit on the hints of what the pipeline looks like step by step: we have the user, the person, the human, that has a problem and contacts us looking for a lawyer. That can be either, as Martí said, a chat — so just typing — or calling us, or even WhatsApp.
Now we have an agent that actually answers the phone, in a way, or talks via chat, and that’s all AI. That agent can hold a natural conversation with the user, because what we need from that conversation is to understand what the user’s problem is, where it is, and to ask the questions that we know will give us the lead, or that we can use later on to validate and say, okay, this is a viable legal case. As Martí said, it also has to be an economically viable one.
All that information is what we need to extract, and the agent knows, because we have trained it with the data from these 15 years. We tell this agent what it is that it needs to ask the user, so we have all that information to build this lead.
Once this conversation happens, then it goes to another agent, as well, that extracts all that data. We have a data model, so we actually have to fill that and get all the data from those leads to our database. Then that also goes to another agent where we do all this validation. Is the case valid? The agent knows how to decide if the case is a valid case or not.
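The extract-then-validate step just described could look roughly like this. The field names and the “required fields” check are hypothetical illustrations; in the real system an LLM extraction agent would produce the dictionary from the conversation transcript.

```python
from dataclasses import dataclass
from typing import Optional

REQUIRED_FIELDS = ("practice_area", "postal_code", "summary")

@dataclass
class CaseLead:
    practice_area: Optional[str] = None  # e.g. "divorce", "criminal"
    postal_code: Optional[str] = None    # used later for jurisdiction matching
    summary: Optional[str] = None        # short description of the problem

def build_lead(extracted: dict) -> CaseLead:
    """Fill the data model from the (hypothetically LLM-produced) extraction."""
    return CaseLead(**{k: extracted.get(k) for k in REQUIRED_FIELDS})

def missing_fields(lead: CaseLead) -> list[str]:
    """Fields the conversation agent still needs to ask the user about."""
    return [f for f in REQUIRED_FIELDS if getattr(lead, f) is None]
```

A lead with no missing fields can move on to validation; otherwise the conversation agent knows exactly which question to ask next.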
Just to go on that part — the agent, once it qualifies the lead, can say: okay, I have everything, the lead is ready. We can just send it to the best lawyer, to the lawyer that this person needs, based on the legal area, on where they’re located, the jurisdiction, etc. It just automatically sends.
That’s more or less 50% of the cases. It’s a pipeline built on AI agents, pure agentic tech. Everything goes by itself.
At some point, also, it could be that it’s not a real lead. We are an online platform, so we receive calls that are an error, or there’s not a real legal issue there. It’s also as valuable to know not to send something to a lawyer. The agent also knows how to decide that the lead is not a real legal case. It also closes leads when that needs to be done.
There is also a third thing that could happen, and the system handles that as well: escalate to a human, which goes to our lawyers. When the lead needs to be handled by them, because maybe the case is complicated enough, or needs more information than we have, we can also transfer to a human when it’s needed.
We also kind of close the loop, because once we send the lead to a lawyer, we want to know if the user still needs help. In the case that the lawyers do not call for some reason, we make a follow-up call. That is also an AI doing that — it’s also another agent — to follow up and know if this person still needs help. If they do, it goes again back into the pipeline, and we help, again, to find a lawyer that helps with the problem.
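The three outcomes described above (send to a lawyer, close a non-case, escalate to a human) reduce to a small routing decision. The boolean inputs and the exact rules below are hypothetical; ElAbogado’s agents make these judgments from the conversation itself.

```python
from enum import Enum

class Outcome(Enum):
    SEND_TO_LAWYER = "send"   # qualified: forward to the matched lawyer
    CLOSE = "close"           # not a real legal case: close the lead
    ESCALATE = "escalate"     # complex or not yet decidable: hand to a human

def route_lead(is_legal_case: bool, is_viable: bool, is_complex: bool) -> Outcome:
    """Toy router over the agent's three judgments."""
    if not is_legal_case:
        return Outcome.CLOSE
    if is_complex or not is_viable:
        return Outcome.ESCALATE
    return Outcome.SEND_TO_LAWYER
```

The follow-up loop then closes the circle: a lead whose lawyer never called back re-enters this same router after the follow-up agent confirms the user still needs help.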
As Martí said, we communicate with users through all three channels. Chat was actually the first channel we started with the AI, because it’s probably the easiest one, in the sense that you have time to talk with the user; the pace is very different from a voice call. When the user calls us, it has to be in real time. We have latency to take into account, and it’s very challenging. The LLMs are now capable of that; at the very beginning it was very difficult to find a tool that gave you both the latency and the intelligence to do it. Now, it’s possible.
Also — and I don’t know if we mentioned — it’s not only the inbound calls, but we also do outbound calls. The channels talk to each other, so if in a chat you don’t get all the information, you trigger a call, and the system also calls the user to get all the information for the case if needed.
We also now include WhatsApp, which is different because the user really sets the pace there. The user can get the fast answer from us when the user is there, but if the user has to go away and come back, the chat keeps listening, and it waits for the user to talk to us again.
To put everything together: I think we can say that our secret, in a way, is having the 15 years of real legal cases. That’s what we train our whole agentic system on. It lets the system know exactly what we need to ask the user, how to talk to the user, which tone to use, and how to pace the conversation. At the end, we have what we need, the information for that legal case, so we can help them.
Martí Manent: Just one second. So we don’t run out of time, I think we want to show a real call, which will be more interesting. All this explanation leads to that.
Roland Vogl: Yeah, we can’t hear the audio, that’s too bad.
Veronica: No?
Roland Vogl: Nope. But… We have a mute agent.
Martí Manent: Yeah, oh, we can share. I think that it’s in the presentation. You can see here, it’s our page. We share this information online.
What is very relevant here is that we had a lot of challenges. The first challenge was to build the orchestration for the agents. The second was to decide what kind of practice. The third: decide whether the problem was a real legal problem, and whether it’s economically viable or not. We also had challenges like the latency of the call, because if we don’t have the right latency and the answer comes back too fast, the user doesn’t perceive a natural back-and-forth and says, okay, that’s a machine, I don’t want to talk with a machine.
Anyone who wants to see it online can go to elabogado.com/lawyer and check real calls. You can see chats and all of this. I think that if people want to ask questions, now would be the time.
Roland Vogl: Yeah, so I think one question I have is: at what point did you feel comfortable with the agent you tasked with this triaging job, the one you had human lawyers doing before? Is this a legal case, what kind of case is it, and who’s the right lawyer in what jurisdiction to connect this case with? That’s the key functionality, and probably the first functionality you tried to use AI for. How much testing and validation did you do before you felt like, okay, we’re going to use this now, and maybe we actually no longer have that job for our human lawyers?
Martí Manent: Yeah, I think that has two answers. One is that we have a lot of cases from these last years, so we can train the machine, and we can also check whether the answer the machine gives us matches the answer that a real lawyer gave previously. That is very relevant. We trained the platform with real cases, and then we passed the same cases through the platform and compared the results.
Before we felt comfortable, as you asked, we didn’t put it online. The first thing we put online was the chat, and that’s also very relevant, because it’s easier to manage a case through chat — because of the technology, not the information. After we felt comfortable with the chat, we jumped to voice. That was the path: first, train the machine with all the data, then run the tests. Once the platform gave the same answers that our lawyers had given on those cases, we put it online.
The second part of the question: yes, these agents are doing the job that real human lawyers were doing. As we mentioned last week when I was at Stanford, I think it is becoming a tsunami for the profession. A year ago, it felt very strange to us, because we could see a real tsunami coming toward the profession, and people were not running. We now have less than half the people who had been doing this job.
Roland Vogl: I see, okay. So, now, at this point, can you say what percentage of your team — is this fully automated, this task now with agents already, or do you still have some humans involved in it, even if just for quality assurance?
Martí Manent: Yeah, almost 70% of the cases are fully automated. You asked which lawyer we give the lead to. One of the tricky questions was finding where the user is, because in the States, for example, a lot of cities have the same name. How can you tell whether it’s the Miami in Florida, the one in California, or the one in Texas? One trick we found is to ask for the postal code. Which postal code are you in — that is how you make the match with a lawyer in that jurisdiction.
For the roughly 30% of cases that cannot be managed by an agent, we send the case to a human, who manages it. We also do post-checks of the calls. What does that mean? After an agent has made a decision, we check whether it worked correctly, and we keep tuning the results.
Roland Vogl: Okay. There’s a question from somebody in the chat, which is: deciding whether a case has merit or not — is that not already legal practice, and therefore might be called unauthorized practice of law?
Martí Manent: What I understand is, to try to understand if the case can go on or not — is that the question?
Roland Vogl: Yeah, if a case has merit or not. If you say, like, okay, this is not even a legal case, versus, it is a legal case that you want to match with a lawyer. If you do this automatically, isn’t that already legal practice? It could be, therefore, a violation of unauthorized practice of law.
Martí Manent: What we are doing is — our platform is a marketing platform. Our customers are the lawyers who want new clients. We help lawyers by giving them the leads they are asking for. For example, if you ask for civil-case leads, we cannot give you a criminal lead, because you don’t do that practice.
What we are doing is helping people — as I explained at the beginning — who don’t know what kind of practice they need and don’t know if there is a real case there. We just ask them some questions to understand: okay, you’re asking about a civil case — yes, that’s a civil case. You are asking for a lawyer with that practice where? Miami, San Francisco, or wherever. And we say, okay, that is the lawyer you previously chose.
For example, in the States, we don’t choose the lawyer; the lawyer is chosen by the user. In other jurisdictions, the platform can choose the lawyer, but in the States, the user chooses. If that lawyer doesn’t do the practice the user is asking for, we need to say: okay, that lawyer does not handle fiscal cases or criminal cases.
Roland Vogl: Okay. Sugaram’s asking: what LLMs are you using to build your agents?
Martí Manent: Yeah, nice question. We use all the big names you know. Here, I’m going to give you some tricks. We are not going to explain the whole platform and how it works, but we recommend splitting the problem into parts. Treating the whole problem as one cake is not the solution. You need to break the problem into parts, and you can solve different parts with different LLMs. Let me explain a little. Some LLMs answer more quickly. For example, if you are handling a live call, you cannot use Opus from Anthropic; you probably need a smaller LLM that responds faster, so the latency is lower. That is very, very relevant.
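The “different LLMs for different parts” advice amounts to a routing table keyed by sub-task, with latency as the main constraint. The model names here are hypothetical placeholders, not the models ElAbogado actually uses:

```python
# Hypothetical model names; the point is the routing, not specific vendors.
MODEL_FOR_TASK = {
    "voice_turn":   "small-fast-model",  # real-time call: latency-critical
    "chat_turn":    "mid-model",         # interactive, but a pause is tolerable
    "case_summary": "large-model",       # offline: quality matters, speed doesn't
}

def pick_model(task: str) -> str:
    """Route each sub-task to the cheapest model that meets its latency budget."""
    return MODEL_FOR_TASK.get(task, "mid-model")
```

The same decomposition also makes testing easier: each sub-task can be benchmarked, tuned, and swapped to a different model independently of the rest of the pipeline.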
Roland Vogl: Got it, okay. Awesome, so what’s next for you? I mean, now it seems like you have, kind of, this business figured out. That’s an interesting point you made, too, which is — in a sense, what I hear you say is, look, with a new tech, you can do this all quite quickly, right? But it still required 15 years of experience to know what the issues are, what’s the tone, how to place this technology into this business opportunity, right? That’s really where the rubber hits the road for many, right? Theoretically speaking, we all have the tools now to build a flywheel, a machine like this, right? But exactly how to fit it into the world is really where you need the context and the experience and all that, right?
I think that’s one of the learnings from this. One of the questions that comes for me is: where do you… what’s the next thing you want to apply this technology towards? Can you share anything, or it’s all still in stealth, and we’ll learn about it at your next presentation at Codex.
Marti Manent No, don’t worry. I think that we need to move the line up close to 85% of the cases. One of the learnings that we have found is that if you try to put it 100% automatic, you will probably have a big problem, because it’s very, very hard. The actual LLMs make mistakes. They are not perfect. If you try to do a perfect solution, you are gonna make some mistakes, because it’s very, very hard to achieve that — almost impossible.
The first goal is to push the line up to 85% or 90%, but never to 100, because some cases must be managed by a human. We are also jumping to new jurisdictions. That is easier now, because we have the tools. Going to another country used to be about a one-year project for us; now it’s about a one-month project. It’s very, very easy now for us to go to different countries. Another thing: we are a Spanish-speaking platform, and one option on the table is serving English-speaking users as well.
Roland Vogl: Got it, okay. Everyone can look forward to ElAbogado coming to a country near you. There’s global expansion on the horizon. Maybe last question, because we’re over time already, unfortunately — Reem is asking how you deal with privacy issues.
Marti Manent Yeah, that’s a very nice, interesting question. We are from Europe, and in Europe, the privacy issues are very, very interesting. We have a distributed platform, and for each country, we manage the data from that country. We use Amazon Web Service, and also we use providers that can provide service all around the world. That is one of the solutions, but it’s very easy, though it has some restrictions. You need to use the platform that can go for an instance for each country.
Roland Vogl: Got it, okay. Cool. Well, let’s… oh, hold on, Salih has their hand up. Salih, you wanna unmute yourself and speak up?
Salih Tarhan: Yeah, Salih, yeah, correct. I need to give some context, that’s why, actually, I raised my hand. Thank you guys for the presentation, it was great. So, while I reviewed your website, actually, I saw that you are probably targeting the U.S. attorneys as well.
My question is — we built a platform helping attorneys to find attorneys in other jurisdictions, in other U.S. states. What we saw is that there are a lot of problems on ethics and professional responsibility. You cannot market, you cannot feasibly vet the attorneys in some states. There are the ABA model rules, a lot of highly regulated areas. I think you plan to extend your reach to the U.S. Do you have a plan to navigate this complex area of the law? Do you see any problem in the future for that?
Martí Manent: Yeah, there are some countries we cannot go to, because the lawyers there cannot do any kind of advertising; it’s an old-school, closed market. I don’t think that is good for justice. It’s good for justice when users and consumers can see different kinds of lawyers, and lawyers can advertise and explain their practice. If lawyers cannot explain their practice, that is a closed market, and that’s not good for competition. So there are some countries we cannot go to.
Roland Vogl: Yeah, so I think, yeah, there’s some… there are several platforms that have been trying to resolve that in the U.S. I could probably try to connect you with some people who’ve been working on that, if you’re interested in that. I think one of the questions is: do you take a share in the legal fees that are being generated by this lead? That’s a different proposition and probably harder — it probably won’t pass the fee-splitting rules in the U.S. Whereas if you have a law firm just paying a general subscription fee or something to a service, that’s more likely just fine.
Well, anyway, I think this was a great conversation. It’s interesting to see how a legal tech provider from the pre-GPT, pre-LLM days — you started already in 2019, as you shared, with machine learning to help with the matchmaking — has really leveraged LLM capabilities in the business, and how that affects your headcount and how you think about it. I think that’s been really instructive for a lot of legal tech players who’ve been around for some time, and maybe for some new entrants, too.
That was a really rich conversation. I really appreciate you, Martí and Veronica, staying up late on this special day. I know today is like a holiday in Barcelona, and it’s already late at night, so thank you so much for being with us today. On behalf of everyone, a quick round of applause. Thank you for joining us, and thanks to everyone in the group for tuning in. Look out for our CodeX FutureLaw videos, which will be coming out soon, and I will send an update on our next meeting in the near future. Alright, good to see you all. Thank you very much.