
Migrating from Vercel AI SDK to NeuroLink

Complete guide to migrating from Vercel AI SDK to NeuroLink. Learn streaming patterns, API routes, and Next.js integration with working code examples.

By the end of this guide, you’ll have migrated your Vercel AI SDK application to NeuroLink with side-by-side code comparisons, pattern translations, and a step-by-step migration path.

Verification Details: This migration guide was verified with NeuroLink v8.32.0 and Vercel AI SDK v5.x.

Why Migrate from Vercel AI SDK?

Before diving into the migration process, let’s understand why teams choose to migrate from Vercel AI SDK to NeuroLink:

Provider Flexibility

While Vercel AI SDK supports multiple providers, NeuroLink offers unified access to 13 providers through a single API. Switch between OpenAI, Anthropic, Google, Mistral, and other providers without changing your application code.
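
To illustrate, provider and model become plain data in the call, so switching backends is a one-line change at the call site. A minimal sketch; the LLMClient interface is an assumption for illustration, mirroring the generate() shape used throughout this guide (in your app, pass the NeuroLink instance):

```typescript
// Assumed minimal shape of neurolink.generate(), for illustration only.
interface GenerateOptions {
  input: { text: string };
  provider: string;
  model: string;
}
interface GenerateResult {
  content: string;
}
interface LLMClient {
  generate(opts: GenerateOptions): Promise<GenerateResult>;
}

// Provider and model are plain strings, so switching backends requires
// no changes to application code - only to the target passed in.
async function ask(
  client: LLMClient,
  target: { provider: string; model: string },
  prompt: string
): Promise<string> {
  const result = await client.generate({ input: { text: prompt }, ...target });
  return result.content;
}

// Same call, different backend:
// await ask(neurolink, { provider: 'openai', model: 'gpt-4' }, 'Hello');
// await ask(neurolink, { provider: 'anthropic', model: 'claude-3-haiku-20240307' }, 'Hello');
```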

Cost Optimization

NeuroLink’s cost optimization features can significantly reduce costs through:

  • Automatic cheapest model selection
  • Intelligent provider routing
  • Cost-aware fallback strategies

Note: Actual savings depend on your usage patterns. NeuroLink provides the tooling for cost optimization; configure it based on your requirements.
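
One way to apply the cheapest-model and fallback ideas without any special API is a local price table ordered cheapest-first. A sketch; the candidates and per-token prices are illustrative placeholders, and `generate` stands in for a call to neurolink.generate():

```typescript
interface Candidate {
  provider: string;
  model: string;
  costPer1kTokens: number; // illustrative placeholder, not a real rate
}

// Cheapest first; copy the array so the caller's ordering is untouched.
function byCost(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort((a, b) => a.costPer1kTokens - b.costPer1kTokens);
}

async function generateCheapestFirst(
  generate: (c: Candidate) => Promise<string>,
  candidates: Candidate[]
): Promise<string> {
  for (const candidate of byCost(candidates)) {
    try {
      return await generate(candidate);
    } catch {
      // Cost-aware fallback: try the next-cheapest candidate.
    }
  }
  throw new Error('All candidates failed');
}
```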

Enhanced Streaming

NeuroLink provides consistent streaming behavior across all providers, including those that don’t natively support streaming. You get the same streaming API whether you’re using GPT-4, Claude, or Llama.

Production Features

Built-in rate limiting, automatic retries, request queuing, and comprehensive observability come standard with NeuroLink: features that often require additional infrastructure with the Vercel AI SDK.
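
If you are migrating incrementally and want comparable retry behavior in the meantime, a generic wrapper is enough. A sketch; the attempt count and delays are arbitrary choices, and the commented usage assumes the generate() call shown later in this guide:

```typescript
// Retry an async call with exponential backoff. During migration this can
// wrap any generate() call; built-in retries can replace it afterwards.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// const result = await withRetry(() =>
//   neurolink.generate({ input: { text: prompt }, provider: 'openai', model: 'gpt-4' })
// );
```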

Note on AI SDK 6: Vercel released AI SDK 6 in December 2025 with significant new capabilities, including full MCP (Model Context Protocol) support, an Agent abstraction for building reusable agents, and DevTools integration. With over 20 million monthly downloads, the AI SDK continues to evolve. Evaluate your specific requirements: if MCP support or the new agent patterns are critical to your use case, compare both SDKs’ implementations before deciding on your migration path.

Understanding the Architecture Differences

Vercel AI SDK Architecture

The Vercel AI SDK typically follows this pattern:

// Vercel AI SDK pattern
import { createOpenAI } from '@ai-sdk/openai';
import { generateText, streamText } from 'ai';

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Direct provider coupling
const result = await generateText({
  model: openai('gpt-4'),
  prompt: 'Hello, world!',
});

NeuroLink abstracts the provider layer with a clean, unified API:

// NeuroLink pattern
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

// Provider-agnostic - specify provider and model separately
const result = await neurolink.generate({
  input: { text: 'Hello, world!' },
  provider: 'openai',
  model: 'gpt-4',
});

Step-by-Step Migration Guide

Step 1: Install NeuroLink

First, install the NeuroLink SDK alongside your existing Vercel AI SDK installation:

npm install @juspay/neurolink
# or
yarn add @juspay/neurolink
# or
pnpm add @juspay/neurolink

Step 2: Configure Environment Variables

Add your provider API keys to your environment:

# .env.local
# Keep your existing provider keys during migration
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key  # add a key for each provider you plan to use

Step 3: Create a Centralized Client

Create a centralized client configuration:

// lib/neurolink.ts
import { NeuroLink } from '@juspay/neurolink';

export const neurolink = new NeuroLink();

Migrating Core Patterns

Text Generation Migration

Vercel AI SDK (Before):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function generateResponse(prompt: string) {
  const { text } = await generateText({
    model: openai('gpt-4'),
    prompt,
    maxOutputTokens: 1000,
    temperature: 0.7,
  });

  return text;
}

NeuroLink (After):

import { neurolink } from '@/lib/neurolink';

async function generateResponse(prompt: string) {
  const result = await neurolink.generate({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
    maxTokens: 1000,
    temperature: 0.7,
  });

  return result.content;
}

Streaming Text Migration

Streaming is where NeuroLink shines, providing consistent behavior across all providers.

Vercel AI SDK (Before):

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

async function streamResponse(prompt: string) {
  const result = streamText({
    model: openai('gpt-4'),
    prompt,
  });

  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}

NeuroLink (After):

import { neurolink } from '@/lib/neurolink';

async function streamResponse(prompt: string) {
  const result = await neurolink.stream({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  for await (const chunk of result.stream) {
    if ('content' in chunk) {
      process.stdout.write(chunk.content);
    }
  }
}

Chat Conversations Migration

Vercel AI SDK (Before):

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

type Message = {
  role: 'user' | 'assistant' | 'system';
  content: string;
};

async function chat(messages: Message[]) {
  const { text } = await generateText({
    model: openai('gpt-4'),
    messages,
  });

  return text;
}

NeuroLink (After):

import { neurolink } from '@/lib/neurolink';
import type { ChatMessage } from '@juspay/neurolink';

async function chat(messages: ChatMessage[]) {
  const result = await neurolink.generate({
    input: { text: messages[messages.length - 1].content },
    conversationMessages: messages,
    provider: 'openai',
    model: 'gpt-4',
  });

  return result.content;
}
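
The chat helper above returns only the assistant reply; to keep a multi-turn conversation going, append both turns to the history before the next call. A sketch with a local Message type mirroring the roles used in this guide (the `chat` parameter stands in for the function defined above):

```typescript
type Role = 'user' | 'assistant' | 'system';

interface Message {
  role: Role;
  content: string;
}

// Append the user turn, call the model, append the reply. The history
// array is the single source of truth for the conversation.
async function takeTurn(
  history: Message[],
  userText: string,
  chat: (messages: Message[]) => Promise<string>
): Promise<Message[]> {
  const next: Message[] = [...history, { role: 'user', content: userText }];
  const reply = await chat(next);
  return [...next, { role: 'assistant', content: reply }];
}
```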

Next.js API Routes Migration

Basic API Route

Vercel AI SDK (Before):

// app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4'),
    messages,
  });

  return result.toUIMessageStreamResponse();
}

NeuroLink (After):

// app/api/chat/route.ts
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await neurolink.stream({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  // Convert to ReadableStream for Response
  const encoder = new TextEncoder();
  const readable = new ReadableStream({
    async start(controller) {
      for await (const chunk of result.stream) {
        if ('content' in chunk) {
          controller.enqueue(encoder.encode(chunk.content));
        }
      }
      controller.close();
    },
  });

  return new Response(readable, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}

Non-Streaming API Route

// app/api/generate/route.ts
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await neurolink.generate({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  return Response.json({ result });
}
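
From the browser, this route is consumed with a plain fetch. A minimal client helper; the /api/generate path and the { result } payload shape match the route above:

```typescript
// Client-side helper for the non-streaming route above.
async function generate(prompt: string): Promise<string> {
  const res = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) {
    throw new Error(`Generate failed: ${res.status}`);
  }
  const data = await res.json();
  // The route returns Response.json({ result }), so unwrap the content here.
  return data.result.content;
}
```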

Server Actions Migration

Next.js Server Actions work seamlessly with NeuroLink.

Vercel AI SDK (Before):

// app/actions.ts
'use server';

import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function generateContent(prompt: string) {
  const { text } = await generateText({
    model: openai('gpt-4'),
    prompt,
  });

  return text;
}

NeuroLink (After):

// app/actions.ts
'use server';

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

export async function generateContent(prompt: string) {
  const result = await neurolink.generate({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  return result.content;
}

// Easy model switching for cost optimization
export async function generateContentCheap(prompt: string) {
  const result = await neurolink.generate({
    input: { text: prompt },
    provider: 'anthropic',
    model: 'claude-3-haiku-20240307', // Cheaper model
  });

  return result.content;
}

Streaming Patterns Deep Dive

Text Streaming with Callbacks

NeuroLink provides clean streaming with async iteration:

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

async function streamWithProgress(prompt: string, onChunk: (chunk: string) => void) {
  let fullContent = '';

  const result = await neurolink.stream({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  for await (const chunk of result.stream) {
    if ('content' in chunk) {
      fullContent += chunk.content;
      onChunk(chunk.content);
    }
  }

  return fullContent;
}

// Usage
await streamWithProgress('Tell me a story', (chunk) => {
  process.stdout.write(chunk);
});

Server-Sent Events (SSE)

// app/api/stream/route.ts
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const encoder = new TextEncoder();

  const readable = new ReadableStream({
    async start(controller) {
      const result = await neurolink.stream({
        input: { text: prompt },
        provider: 'openai',
        model: 'gpt-4',
      });

      for await (const chunk of result.stream) {
        if ('content' in chunk) {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ content: chunk.content })}\n\n`)
          );
        }
      }

      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });

  return new Response(readable, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}

Client-Side SSE Consumption

// components/StreamingChat.tsx
'use client';

import { useState } from 'react';

export function StreamingChat() {
  const [response, setResponse] = useState('');
  const [isLoading, setIsLoading] = useState(false);

  async function handleSubmit(prompt: string) {
    setIsLoading(true);
    setResponse('');

    const res = await fetch('/api/stream', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });

    const reader = res.body?.getReader();
    const decoder = new TextDecoder();

    if (!reader) return;

    let buffer = '';

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      // Buffer partial lines: a network chunk can split an SSE event in two
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? '';

      for (const line of lines) {
        if (line.startsWith('data: ') && line !== 'data: [DONE]') {
          const data = JSON.parse(line.slice(6));
          setResponse((prev) => prev + data.content);
        }
      }
    }

    setIsLoading(false);
  }

  return (
    <div>
      <button onClick={() => handleSubmit('Tell me a joke')} disabled={isLoading}>
        {isLoading ? 'Generating...' : 'Generate'}
      </button>
      <div>{response}</div>
    </div>
  );
}

Advanced Migration Patterns

Provider Fallback Pattern

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

interface ProviderConfig {
  provider: string;
  model: string;
}

async function generateWithFallback(prompt: string) {
  const providers: ProviderConfig[] = [
    { provider: 'openai', model: 'gpt-4' },
    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929' },
    { provider: 'google-ai', model: 'gemini-2.0-flash' },
  ];

  for (const { provider, model } of providers) {
    try {
      const result = await neurolink.generate({
        input: { text: prompt },
        provider,
        model,
      });

      return {
        content: result.content,
        provider,
        model,
      };
    } catch (error) {
      console.warn(`Provider ${provider}/${model} failed, trying next...`);
      continue;
    }
  }

  throw new Error('All providers failed');
}

Parallel Generation Pattern

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

async function generateMultiple(prompt: string) {
  const providers = [
    { provider: 'openai', model: 'gpt-4' },
    { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929' },
  ];

  const results = await Promise.allSettled(
    providers.map(({ provider, model }) =>
      neurolink.generate({
        input: { text: prompt },
        provider,
        model,
      })
    )
  );

  // Pair each settled result with its provider before filtering, so the
  // indices stay aligned after rejected results are dropped.
  return results.flatMap((r, i) =>
    r.status === 'fulfilled'
      ? [
          {
            provider: providers[i].provider,
            model: providers[i].model,
            result: r.value.content,
          },
        ]
      : []
  );
}

Middleware Integration

Create custom middleware for request processing:

// middleware/neurolink.ts
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

async function checkRateLimit(userId: string): Promise<boolean> {
  // Your rate limiting logic
  return true;
}

async function logUsage(userId: string, content: string): Promise<void> {
  // Your logging logic
  console.log(`User ${userId} generated ${content.length} chars`);
}

export async function generateWithMiddleware(userId: string, prompt: string) {
  // Check rate limits
  const allowed = await checkRateLimit(userId);
  if (!allowed) {
    throw new Error('Rate limit exceeded');
  }

  const result = await neurolink.generate({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  // Log usage
  await logUsage(userId, result.content);

  return result.content;
}

Streaming with Abort Controller

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

async function streamWithAbort(prompt: string, signal: AbortSignal) {
  const streamResult = await neurolink.stream({
    input: { text: prompt },
    provider: 'openai',
    model: 'gpt-4',
  });

  let result = '';

  for await (const chunk of streamResult.stream) {
    if (signal.aborted) {
      console.log('Stream aborted');
      break;
    }
    if ('content' in chunk) {
      result += chunk.content;
      process.stdout.write(chunk.content);
    }
  }

  return result;
}

// Usage with timeout
const controller = new AbortController();
setTimeout(() => controller.abort(), 30000); // 30 second timeout

await streamWithAbort('Write a long story', controller.signal);

Testing Your Migration

Unit Tests

// __tests__/neurolink.test.ts
import { NeuroLink } from '@juspay/neurolink';
import { describe, it, expect } from 'vitest';

describe('NeuroLink Migration', () => {
  const neurolink = new NeuroLink();

  it('should generate text response', async () => {
    const result = await neurolink.generate({
      input: { text: 'Say hello' },
      provider: 'openai',
      model: 'gpt-4',
    });

    expect(result).toBeTruthy();
    expect(result.content).toBeTruthy();
    expect(typeof result.content).toBe('string');
  });

  it('should stream responses', async () => {
    const chunks: string[] = [];

    const result = await neurolink.stream({
      input: { text: 'Count to 5' },
      provider: 'openai',
      model: 'gpt-4',
    });

    for await (const chunk of result.stream) {
      if ('content' in chunk) {
        chunks.push(chunk.content);
      }
    }

    expect(chunks.length).toBeGreaterThan(0);
  });

  it('should handle provider switching', async () => {
    const providers = [
      { provider: 'openai', model: 'gpt-4' },
      { provider: 'anthropic', model: 'claude-sonnet-4-5-20250929' },
    ];

    for (const { provider, model } of providers) {
      const result = await neurolink.generate({
        input: { text: 'Hello' },
        provider,
        model,
      });

      expect(result).toBeTruthy();
    }
  });
});

Integration Tests

// __tests__/integration/api.test.ts
import { describe, it, expect } from 'vitest';

describe('API Route Integration', () => {
  const baseUrl = process.env.TEST_BASE_URL || 'http://localhost:3000';

  it('should handle generate endpoint', async () => {
    const response = await fetch(`${baseUrl}/api/generate`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: 'Say hello' }),
    });

    const data = await response.json();
    expect(data.result).toBeTruthy();
  });

  it('should handle streaming endpoint', async () => {
    const response = await fetch(`${baseUrl}/api/stream`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: 'Count to 3' }),
    });

    expect(response.headers.get('Content-Type')).toBe('text/event-stream');

    const reader = response.body?.getReader();
    expect(reader).toBeTruthy();

    const { value } = await reader!.read();
    expect(value).toBeTruthy();
  });
});

Migration Checklist

Use this checklist to ensure a complete migration:

Dependencies

  • Install @juspay/neurolink
  • Add provider API keys (e.g., OPENAI_API_KEY) to environment

Client Configuration

  • Create centralized NeuroLink client
  • Configure default settings
  • Set up error handling

API Endpoints

  • Migrate /api/chat routes
  • Migrate /api/completion routes
  • Update streaming responses to use async iteration
  • Test SSE implementation

Server Actions

  • Migrate generateText calls to neurolink.generate()
  • Migrate streamText calls to neurolink.stream()
  • Update function signatures

Frontend Components

  • Replace Vercel AI SDK fetch calls with new API structure
  • Update streaming consumption logic
  • Implement custom state management for chat

Testing

  • Run unit tests
  • Run integration tests
  • Test streaming behavior
  • Verify error handling

Cleanup

  • Remove Vercel AI SDK dependencies (ai, @ai-sdk/*)
  • Remove unused provider SDKs
  • Update documentation

Common Migration Patterns

Before and After Summary

Vercel AI SDK → NeuroLink

  • import { generateText } from 'ai' → import { NeuroLink } from '@juspay/neurolink'
  • generateText({ model: openai('gpt-4'), prompt }) → neurolink.generate({ input: { text: prompt }, provider: 'openai', model: 'gpt-4' })
  • streamText({ model, prompt }) → neurolink.stream({ input: { text: prompt }, provider, model })
  • result.textStream → result.stream (async iterable)
  • result.toAIStreamResponse() → Custom ReadableStream response

Key Differences

  1. Package: Use @juspay/neurolink instead of ai or @ai-sdk/*
  2. Input format: Use input: { text: prompt } instead of prompt or messages
  3. Provider/Model: Specify provider and model as separate fields
  4. Response: Read the generated text from result.content instead of destructuring { text }
  5. Streaming: Returns an async iterable, iterate with for await

Conclusion

By now you have working migration patterns for every Vercel AI SDK concept: generateText, streamText, Next.js API routes, Server Actions, and streaming responses. The key mappings are:

  • ai / @ai-sdk/* becomes @juspay/neurolink
  • generateText() becomes neurolink.generate()
  • streamText() becomes neurolink.stream() with for await iteration
  • Provider and model are specified as separate parameters for flexibility
  • Next.js integration works with both API routes and Server Actions

Migrate one route at a time, validate in parallel, and remove the Vercel AI SDK once all routes are confirmed working.

This post is licensed under CC BY 4.0 by the author.