Build a Text Summarizer in 100 Lines of Code with Vercel AI SDK

June 16, 2023

AI and Large Language Models (LLMs) are all the rage right now, with large players like OpenAI and HuggingFace offering cutting-edge LLM-as-a-service products. Vercel recently announced their AI SDK, which wraps these LLM-as-a-service products into an incredibly easy-to-use SDK with modern features you’d expect from Vercel, like edge runtime and streaming support.

I wanted to see just how easy it was to get up and running with the Vercel AI SDK and build something simple, but with some potential usefulness – so I put together this demo of a “text summarizer” built into Formidable’s own Next.js website.

[Demo: a user highlights text on a blog post and is prompted to summarize the selected text.]

With roughly 100 lines of code, I was able to create a naive implementation of this text summarizer that allows the website consumer to highlight any text and then get a quick summary of that text using OpenAI’s completion API via the Vercel AI SDK. With little effort, this implementation uses Vercel and OpenAI’s edge runtimes for super-speedy responses, and streams the response back to the end-user for a non-blocking experience. Let’s take a quick peek at how this works.

Creating an API Route to Handle OpenAI Completion

Our frontend will proxy through an API route to communicate with OpenAI via the Vercel AI SDK. The Formidable website uses Next.js (with App Router), so we’ll add a file at src/app/api/summarize/route.ts to create an API route at /api/summarize, and we’ll add the following content.

```ts
// Go ahead and `yarn add ai openai-edge`
import { OpenAIStream, StreamingTextResponse } from "ai";
import { Configuration, OpenAIApi } from "openai-edge";

// Configure our OpenAI API
const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY, // 💡 Add your OpenAI key to your .env
});
const openai = new OpenAIApi(config);

// Route handler to take text from the page and stream back a response.
export async function POST(req: Request) {
  const { text } = await req.json();

  const response = await openai.createCompletion({
    model: "text-davinci-003",
    stream: true,
    max_tokens: 1500,
    // 💡 Craft your own prompt here based on your needs.
    prompt: `Summarize the following text in two or fewer sentences: ${text}`,
  });

  return new StreamingTextResponse(OpenAIStream(response));
}

// 💡 Use Vercel's edge runtime.
export const runtime = "edge";
```

We can now send a POST request to /api/summarize with a text body field, and the endpoint will stream back a response summarizing that text (hopefully in two or fewer sentences).
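Before wiring up any UI, you can sanity-check the endpoint from the command line. This is a sketch that assumes your Next.js dev server is running locally on port 3000; curl's `-N` flag disables output buffering so you can watch the summary stream in chunk by chunk.

```shell
# Hypothetical local dev URL — adjust host/port to match your setup.
curl -N -X POST http://localhost:3000/api/summarize \
  -H "Content-Type: application/json" \
  -d '{"text":"Some long passage of text that you would like summarized..."}'
```
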

Hooking up the frontend

Now we’ll add a frontend component to consume this new API endpoint. We’ll create a client component that we’ll drop into our root layout, and start with some DOM logic to show a button whenever the user has some text selected.

```tsx
"use client";

import * as React from "react";
import { Button } from "@/components/button/button";

export function Summarizer() {
  const [hasSelection, setHasSelection] = React.useState(false);

  // 💡 Listen for text selections on mouseup, and set state accordingly.
  React.useEffect(() => {
    const handler = () => {
      const selection = window.getSelection()?.toString();
      const hasTextSelection = !!selection && selection.trim().length > 0;
      setHasSelection(hasTextSelection);
    };

    document.addEventListener("mouseup", handler);
    return () => {
      document.removeEventListener("mouseup", handler);
    };
  }, []);

  return (
    <div className={`... ${hasSelection ? "visible" : "hidden"}`}>
      <Button>Summarize</Button>
    </div>
  );
}
```

While glossing over some edge cases, this gets us to a point of having a Summarize button show up whenever the user selects some text. Now we’ll add a click handler to our button that will make a request to our /api/summarize route with the selected text, and display the streamed result as it comes back!

```tsx
"use client";

import * as React from "react";
import { Button } from "@/components/button/button";

export function Summarizer() {
  const [hasSelection, setHasSelection] = React.useState(false);
  const [isLoading, setIsLoading] = React.useState(false);
  const [summary, setSummary] = React.useState("");

  // Listen for text selections on mouseup, and set state accordingly.
  React.useEffect(() => { /* ... */ }, []);

  // 💡 Handle summary request. This isn't perfect and doesn't gracefully handle
  // errors from the server, but it's good enough for this demo.
  const handleSummarize = () => {
    setIsLoading(true);
    setSummary("");

    fetch("/api/summarize", {
      method: "POST",
      body: JSON.stringify({ text: window.getSelection()?.toString() || "" }),
    })
      .then((res) => res.body)
      .then(async (body) => {
        if (!body) return;

        // 💡 Use the response's ReadableStream body to read one chunk at a time,
        // appending each decoded chunk to the end of our summary state.
        // (TextDecoder works in the browser, where Node's Buffer isn't available.)
        const reader = body.getReader();
        const decoder = new TextDecoder();

        while (true) {
          const { done, value } = await reader.read();
          if (done) break;

          const newChunk = decoder.decode(value, { stream: true });
          setSummary((old) => old + newChunk);
        }

        setIsLoading(false);
      });
  };

  return (
    <div className={`... ${hasSelection ? "visible" : "hidden"}`}>
      {/* 💡 Show an initial loading state, and the summary text. */}
      {isLoading && !summary && <div>Loading...</div>}
      {summary && <div>{summary}</div>}
      <Button onClick={handleSummarize}>Summarize</Button>
    </div>
  );
}
```

Now we can drop this Summarizer client component into our root layout, and whenever the user selects text on the frontend, they should be offered an option to summarize that selected text. Neat!
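For reference, dropping the component into the App Router root layout might look something like the sketch below. The import path @/components/summarizer is an assumption — use wherever the component actually lives in your project.

```tsx
// src/app/layout.tsx — a minimal sketch; a real layout will have more going on.
import { Summarizer } from "@/components/summarizer"; // 💡 Hypothetical path.

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        {children}
        {/* Render the Summarizer once here so it's available on every page. */}
        <Summarizer />
      </body>
    </html>
  );
}
```

Because the layout wraps every route, the selection listener and Summarize button are active site-wide without any per-page wiring.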

Conclusion

Overall, the Vercel AI SDK makes working with LLM providers easy as 🍰 while still taking advantage of modern performance features like edge runtime execution and response streaming. In this simple demo, we took the OpenAI portion of the SDK for a spin by creating a text summarizer on the Formidable Next.js website.

If you want to know more about Vercel’s AI products, go check out their AI docs – they’re quite thorough, and filled with knowledge nuggets on LLM consumption in general.

(Disclaimer: this demo ignores plenty of edge and error cases; I didn’t want to litter it with details.)
