
Building an Embeddable AI Chatbot as a Micro-Frontend

June 15, 2025

Ever wanted to add a dynamic, AI-powered chatbot to your website but didn't want to rebuild your entire project or get tangled in complex integrations? A great solution is to build the chatbot as a standalone micro-frontend (MFE).

This approach lets you create a self-contained, embeddable application that can be injected into any website with just a few lines of code. In this post, we'll walk through the entire process, from planning and building the chatbot to deploying it and handling the technical hurdles like CORS.


The Plan: Vite + React + TypeScript

For our chatbot MFE, we need a tech stack that is lightweight, modern, and easy to configure for embedding. This makes Vite + React + TypeScript a perfect choice.

  • Vite: It's an incredibly fast build tool with a simple configuration. We can easily set it up to bundle our entire React application (JavaScript and CSS) into a single file.
  • React: It's ideal for building interactive user interfaces like a chat window.
  • TypeScript: It provides type safety, which is crucial for managing the data flowing between our UI, our backend, and the AI service.

The core idea is to build the chatbot as a complete, isolated application that has no dependency on the parent website it will live on.


The Backend API: Serverless Functions with LangChain

The chatbot's "brain," the part that communicates with the AI model, will be a serverless function that lives within the same chatbot project. This keeps everything neatly packaged together.

Instead of calling an AI provider's SDK directly, we'll use LangChain. LangChain is a powerful framework that acts as an "orchestrator," making it much easier to build complex, context-aware applications.

Using LangChain allows us to:

  1. Easily structure prompts and manage conversation history (see the sketch just after this list).
  2. Create "chains" that combine multiple steps, like retrieving data from a document before answering a question (a technique called RAG).
  3. Swap out different AI models with minimal code changes.
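
For instance, point 1 comes almost for free: ChatPromptTemplate can reserve a slot for prior messages with MessagesPlaceholder. Here's a minimal sketch; tracking and passing the actual history is up to your handler.

// A history-aware prompt: prior turns are injected into the "history" slot
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

const historyAwarePrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);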

Our serverless function will use LangChain to process the incoming request and stream the response back to the user.

// /api/chat.ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { BytesOutputParser } from "@langchain/core/output_parsers";

// Initialize the AI model
const model = new ChatOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  model: "gpt-4o",
});

// Create a prompt template to structure the input
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant."],
  ["human", "{input}"],
]);

// Create the processing chain
const chain = prompt.pipe(model).pipe(new BytesOutputParser());

// The API handler
export async function POST(req: Request) {
  const { input } = await req.json();

  // Get a streaming response from the LangChain chain
  const stream = await chain.stream({ input });

  // Return the stream directly to the client
  return new Response(stream, {
    headers: { "Content-Type": "text/plain" },
  });
}
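
Point 3 is also easy to see in this code: the prompt and output parser are model-agnostic, so switching providers only touches the model definition. Here's a sketch using Anthropic instead of OpenAI, assuming the @langchain/anthropic package is installed:

// Swapping the model: only this definition changes, the chain is untouched
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: "claude-3-5-sonnet-latest", // illustrative model name
});

// The chain definition stays exactly the same:
// const chain = prompt.pipe(model).pipe(new BytesOutputParser());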

The Build Config for Embedding

To make our chatbot embeddable, we need to configure Vite to compile the entire application into a single JavaScript file. This makes injection into the host website trivial.

We'll modify the vite.config.ts file to change the build output.

// vite.config.ts
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
  build: {
    // Emit all CSS as one file instead of per-chunk files
    cssCodeSplit: false,
    // Configure Rollup to create a single bundle
    rollupOptions: {
      output: {
        // Force everything into one chunk, even dynamic imports
        inlineDynamicImports: true,
        entryFileNames: `chatbot-bundle.js`,
        assetFileNames: `chatbot-bundle.[ext]`,
      },
    },
  },
})

When you run your build command, Vite will now produce a dist/ folder containing a single chatbot-bundle.js file (plus a chatbot-bundle.css for the styles), ready to be deployed and injected.
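
One caveat: the host page would also have to load that separate chatbot-bundle.css. If you want a truly single-file bundle, one common option is the vite-plugin-css-injected-by-js plugin (assuming you add that package), which inlines the compiled CSS into the JS bundle at build time:

// vite.config.ts (CSS inlined into the JS bundle)
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import cssInjectedByJsPlugin from 'vite-plugin-css-injected-by-js'

export default defineConfig({
  plugins: [react(), cssInjectedByJsPlugin()],
  build: {
    rollupOptions: {
      output: {
        inlineDynamicImports: true,
        entryFileNames: `chatbot-bundle.js`,
      },
    },
  },
})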


How to Inject It Into a Website

Now that we have a deployed chatbot bundle, we can inject it into any existing website. The modern way to do this is with dynamic script loading. No <iframe> is required!

First, in your MFE's entry point (main.tsx), make sure your React app knows which DOM element to mount itself into.

// chatbot-project/src/main.tsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App' // Your main chatbot component
import './index.css'

// The ID of the div on the host page
const MOUNT_ELEMENT_ID = 'ai-chatbot-root';

const mountElement = document.getElementById(MOUNT_ELEMENT_ID);
if (mountElement) {
  ReactDOM.createRoot(mountElement).render(<App />);
}
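
If the host site is a plain HTML page rather than a React app, the embed really is just two lines: the mount point and the script tag (the URL is a placeholder for wherever you deploy the bundle).

<!-- Anywhere in the host page's HTML -->
<div id="ai-chatbot-root"></div>
<script src="https://your-deployed-chatbot-url.com/chatbot-bundle.js" async></script>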

If the host is itself a React app (the example below is a Next.js client component), create a simple component that provides the mount point and loads the script.

// host-website/src/components/ChatbotEmbed.tsx
'use client';
import { useEffect } from 'react';

const CHATBOT_SCRIPT_URL = 'https://your-deployed-chatbot-url.com/chatbot-bundle.js';

export function ChatbotEmbed() {
  useEffect(() => {
    // Prevent adding the script multiple times
    if (document.querySelector(`script[src="${CHATBOT_SCRIPT_URL}"]`)) {
      return;
    }

    const script = document.createElement('script');
    script.src = CHATBOT_SCRIPT_URL;
    script.async = true;
    document.body.appendChild(script);

    return () => {
      // Clean up by removing the script tag when the component unmounts
      // (note: this removes the tag, but code that already ran stays loaded)
      const existingScript = document.querySelector(`script[src="${CHATBOT_SCRIPT_URL}"]`);
      if (existingScript) {
        document.body.removeChild(existingScript);
      }
    };
  }, []);

  // Provide the div for the chatbot to mount into
  return <div id="ai-chatbot-root"></div>;
}
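
Then render the component wherever the chatbot should appear, for example in a page or layout (the file path and relative import below are illustrative):

// host-website/src/app/page.tsx (illustrative path)
import { ChatbotEmbed } from '../components/ChatbotEmbed';

export default function Page() {
  return (
    <main>
      {/* ...the rest of your page... */}
      <ChatbotEmbed />
    </main>
  );
}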

Handling CORS (Cross-Origin Resource Sharing)

Your chatbot UI, running on your-website.com, is trying to fetch data from an API running on a different domain, your-deployed-chatbot-url.com. Browsers will block this for security reasons unless the API server explicitly gives permission.

You must configure your backend API to send the correct CORS headers. Here is a generic way to handle this using the standard Web Response API, which works across most modern serverless platforms. One detail that's easy to miss: runtimes that route by exported method (one function per HTTP verb, as below) never pass preflight OPTIONS requests to your POST handler, so the preflight needs its own exported handler.

// /api/chat.ts
// ... (LangChain setup from before)

// The origin of the website where the chatbot is embedded
const ALLOWED_ORIGIN = 'https://your-website.com';

const CORS_HEADERS = {
  'Access-Control-Allow-Origin': ALLOWED_ORIGIN,
  'Access-Control-Allow-Methods': 'POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type',
};

// Handle preflight "OPTIONS" requests for CORS.
// Runtimes that route by exported method never hand OPTIONS
// requests to POST, so the preflight gets its own handler.
export async function OPTIONS() {
  return new Response(null, { status: 204, headers: CORS_HEADERS });
}

export async function POST(req: Request) {
  // Ensure the request is coming from our allowed website
  const origin = req.headers.get('origin');
  if (origin !== ALLOWED_ORIGIN) {
    return new Response('Forbidden', { status: 403 });
  }

  // --- Main Logic ---
  const { input } = await req.json();
  const stream = await chain.stream({ input });

  // Attach CORS headers to the actual streaming response
  return new Response(stream, {
    headers: {
      'Content-Type': 'text/plain',
      ...CORS_HEADERS,
    },
  });
}
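
For completeness, here's a sketch of the client-side call inside the chatbot UI: a plain fetch that reads the streamed response chunk by chunk (the endpoint URL is a placeholder, and onToken is whatever appends text to your chat state):

// Inside the chatbot UI: send a message and stream the reply
async function sendMessage(input: string, onToken: (token: string) => void) {
  const res = await fetch('https://your-deployed-chatbot-url.com/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ input }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  // Decode the byte stream into text as it arrives
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onToken(decoder.decode(value, { stream: true }));
  }
}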

And that's it! You now have a complete, self-contained AI chatbot that can be deployed independently and embedded anywhere on the web, demonstrating a powerful and modern approach to building web applications.

For an example of this in action, check out the AI Chatbot Project page on this website.