🌻 E44: Generative AI as a solution for Information Extraction?
The answer is both yes and no.
🌸 AI: power to the people
So Ethan Mollick brings the news that, as research shows, a huge percentage of people now use AI in the workplace as a matter of routine. That holds for both Europe and the United States, and for all walks of life: lawyers, marketers, programmers, you name it.
Clearly, it should also work for information extraction specialists such as me. LLMs are a solution to everything! There are simply not enough problems for LLMs to solve; we are running out of problems!
In any case, LLMs are the solution.
🌸 LLMs for information extraction
Over the last couple of weeks, I've been working on a problem for longform.ai: extracting structured information from unstructured sources in the financial field. Great company, interesting problem. Say I have a bunch of texts that talk about all kinds of aspects of start-ups, businesses, investors and so forth. How do I turn that into structured data?
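To make the task concrete, here is a rough sketch of what the target structure could look like. The field names are my own illustration, not longform.ai's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Founder:
    name: str
    role: Optional[str] = None  # e.g. "CEO", if the text mentions it


@dataclass
class Company:
    name: str
    founders: list[Founder] = field(default_factory=list)
    investors: list[str] = field(default_factory=list)
    funding_round: Optional[str] = None  # e.g. "Series A", if stated


# The extraction task: turn free-form text like
#   "Acme Robotics, founded by Jane Doe, raised a Series A led by BigFund."
# into
#   Company(name="Acme Robotics",
#           founders=[Founder(name="Jane Doe")],
#           investors=["BigFund"],
#           funding_round="Series A")
```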
[Image: a shop in Basel. It exists.]
The obvious first answer is: use LLMs. That answer is not wrong, but it is not exactly right either.
There are several reasons why that is so.
First of all, in spite of appearances, LLMs are not good at everything. Currently.
As Derong Xu et al. document in the excellent overview article “Large Language Models for Generative Information Extraction: A Survey”, discriminative models of the previous generation of language models beat the new generation of generative models across the board on information extraction tasks. To be precise: they do unless one uses fine-tuned models, or models prompted in specific ways; then performance starts to become on par with discriminative models.
These perhaps unexpected results have several causes, all of which are obvious once you think about it.
First off, discriminative models were designed to discriminate: to give right/wrong judgments at any given point. If you ask your child to list all the animals in a piece of text she read a few minutes ago, that is a harder task than handing her the piece of paper and asking her to mark up all the animals. I know the comparison is lame, but you get the point.
Second, LLMs do not see many information extraction tasks during training.
As Kai Zhang et al. point out in “Aligning Instruction Tasks Unlocks Large Language Models as Zero-Shot Relation Extractors”, LLMs are good at answering questions. LLMs have been trained, as it were, to pass bar exams and look smart in conversation, not so much to be a factory worker of knowledge. So what Zhang suggests (and Xu mentions) is to recast extraction problems as discriminative, question-style tasks, as sketched below.
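Here is a minimal sketch of what that recasting can look like in practice. The relation labels and prompt wording are my own illustration, not taken from the paper:

```python
# Candidate relations for the financial domain; "none of the above" gives the
# model an explicit escape hatch instead of forcing a guess.
RELATIONS = ["founder_of", "investor_in", "subsidiary_of", "none of the above"]


def build_relation_prompt(text: str, head: str, tail: str) -> str:
    """Recast relation extraction as a multiple-choice question, which is
    closer to the QA-style tasks LLMs are tuned for than open-ended extraction."""
    options = "\n".join(f"{i}. {rel}" for i, rel in enumerate(RELATIONS, start=1))
    return (
        f"Text: {text}\n\n"
        f"Question: what is the relation between '{head}' and '{tail}' "
        "according to the text above? Answer with exactly one option number.\n"
        f"{options}"
    )


# Example usage: send the resulting prompt to whichever LLM you use.
print(build_relation_prompt(
    "Jane Doe founded Acme Robotics in 2019.",
    head="Jane Doe",
    tail="Acme Robotics",
))
```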
This helps, but it also brings out the third problem with using LLMs as information extractors: LLMs are terrible people-pleasers and hate to say “no”. If you hand ChatGPT a text with the question “Give me the companies and their founders mentioned in the text”, it will not only dream up founders for any company that is mentioned, but will also treat things that are not really companies as companies. That is pleasing behavior, and it is not helpful if you want to get things right.
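One common mitigation is to give the model an explicit way to say “nothing found” and to validate its output before using it. A rough sketch of that idea follows; the prompt and helper function are my own, not a recommendation from either paper:

```python
import json

EXTRACTION_PROMPT = """\
From the text below, list every company that is explicitly named, together
with its founders, but only if the text itself names them.

Rules:
- Return valid JSON: a list of objects with keys "company" and "founders".
- "founders" must be an empty list when the text does not name any founder.
- If no companies are mentioned at all, return [].
- Do not guess and do not use outside knowledge.

Text:
{text}
"""


def parse_extraction(raw_llm_output: str) -> list[dict]:
    """Validate the model's reply instead of trusting it blindly."""
    data = json.loads(raw_llm_output)
    return [
        {"company": item["company"], "founders": item.get("founders", [])}
        for item in data
        if isinstance(item, dict) and item.get("company")
    ]


# Example: a well-behaved reply for a text that names a company but no founder.
print(parse_extraction('[{"company": "Acme Robotics", "founders": []}]'))
```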
🌸 So, LLMs have issues. Who cares?
OK, so LLMs have issues. Does that curb our AI enthusiasm? No, please. Keep going.
But keep this in mind:
- For general problems, just give the LLMs a spin. Your day-to-day work will benefit, even if only because you tried.
- LLMs need to be applied to your specific problem, tested, and all the usual things.
- If you have a specific business problem, you are well advised to engage People Who've Done This Before.
Even in this day and age, where everyone and their dog claims to be an AI specialist, there are actual card-carrying AI specialists whose track record in the field starts well before ChatGPT.
The outcome of a conversation with an AI specialist may be that your information extraction job can be done at a fraction of the running cost of generative AI models, and at better quality. Or you may hear that using LLMs is just fine for your use case. Either way, you are having a conversation about the actual problem instead of rushing to a solution.
🌸 Ask the specialist
You can find me on LinkedIn. Contact me with any questions regarding AI strategy, technology or whatnot. I'm interested!
🌸 Podcasts
There’s a lot more I could write about but I figure very few people will read this far anyways. If you did, you’re amazing and I appreciate you!
Love MusingsOnAI? Tell your friends!
If your company is interested in reaching an audience of AI professionals and decision-makers, reach us.
If you have any comments or feedback, just respond to this email!
Thanks for reading. Let's explore the world together!
Raahul