AWS Bedrock Integration Guide with NeuroLink
Integrate AWS Bedrock with NeuroLink to access Claude, Llama, and Titan models through AWS infrastructure.
You will connect AWS Bedrock to NeuroLink and access Claude, Llama, Titan, and Nova models through a single unified API. By the end of this guide, you will have IAM roles configured, models invoked with streaming, and multi-region deployments ready for production.
This tutorial covers initial setup, IAM configuration, model invocation patterns, and regional deployment strategies.
Understanding AWS Bedrock and NeuroLink Synergy
AWS Bedrock gives you serverless access to foundation models from Anthropic, Meta, Amazon, and others. You make API calls and pay only for what you use. NeuroLink adds intelligent routing, cost optimization, fallback handling, and unified observability on top of Bedrock.
Key Benefits of This Integration
Combining AWS Bedrock with NeuroLink gives you these advantages:
Unified Model Access: Access Claude, Llama, Titan, and other Bedrock models through a single NeuroLink endpoint. Your applications interact with one consistent API regardless of which underlying model handles the request.
Intelligent Cost Management: NeuroLink’s routing algorithms can direct requests to the most cost-effective model that meets your quality requirements, potentially reducing AI spend by 30-50% compared to always using the most capable model.
Enterprise Security: Leverage AWS’s robust security infrastructure including VPC endpoints, IAM policies, and encryption at rest while adding NeuroLink’s additional governance and audit capabilities.
Regional Resilience: Deploy across multiple AWS regions with automatic failover, ensuring your AI capabilities remain available even during regional outages.
Prerequisites and Initial Setup
Before starting, make sure you have the following in place:
AWS Account Requirements
Your AWS account needs specific configurations to support Bedrock:
- An active AWS account with billing enabled
- Access to the AWS regions where Bedrock is available
- Appropriate service quotas for your expected usage
- IAM permissions to create roles, policies, and access Bedrock
Enabling Bedrock Model Access
AWS Bedrock requires explicit model access enablement before you can use specific foundation models. Navigate to the Bedrock console and complete these steps:
# List available foundation models and check your current access status
aws bedrock list-foundation-models --region us-east-1
For each model you plan to use, submit an access request through the Bedrock console. Anthropic’s Claude models typically receive instant approval, while some models may require additional review.
NeuroLink SDK Installation
Install the NeuroLink SDK in your project:
npm install @juspay/neurolink
Ensure you have your AWS credentials configured for Bedrock access.
IAM Configuration for Bedrock Access
Proper IAM configuration is critical for secure Bedrock access. We recommend creating a dedicated IAM role for NeuroLink with precisely scoped permissions.
Creating the NeuroLink Bedrock Role
Create an IAM role that NeuroLink will assume to access your Bedrock resources:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::123456789012:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "your-neurolink-external-id"
}
}
}
]
}
Replace the account ID with NeuroLink’s AWS account ID (provided in your dashboard) and generate a unique external ID for additional security.
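If you provision roles with scripts, the trust policy above can also be generated programmatically. The sketch below is a minimal illustration, not part of the NeuroLink SDK: `buildTrustPolicy` is a hypothetical helper, the external ID is generated with Node's `crypto.randomUUID`, and the account ID stays a placeholder you replace with the value from your dashboard.

```typescript
import { randomUUID } from "node:crypto";

// Build the assume-role trust policy shown above with a freshly
// generated external ID. "123456789012" is a placeholder account ID.
function buildTrustPolicy(neurolinkAccountId: string, externalId: string) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: { AWS: `arn:aws:iam::${neurolinkAccountId}:root` },
        Action: "sts:AssumeRole",
        Condition: { StringEquals: { "sts:ExternalId": externalId } },
      },
    ],
  };
}

const externalId = randomUUID();
const policy = buildTrustPolicy("123456789012", externalId);
console.log(JSON.stringify(policy, null, 2));
```

Persist the generated external ID somewhere safe (for example, AWS Secrets Manager); you will need the same value in your NeuroLink configuration.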
Bedrock Access Policy
Attach a policy that grants the minimum necessary permissions for Bedrock operations:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "BedrockModelInvocation",
"Effect": "Allow",
"Action": [
"bedrock:InvokeModel",
"bedrock:InvokeModelWithResponseStream"
],
"Resource": [
"arn:aws:bedrock:*::foundation-model/anthropic.claude-3-opus-20240229-v1:0",
"arn:aws:bedrock:*::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
"arn:aws:bedrock:*::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
"arn:aws:bedrock:*::foundation-model/meta.llama3-70b-instruct-v1:0",
"arn:aws:bedrock:*::foundation-model/amazon.titan-text-express-v1"
]
},
{
"Sid": "BedrockModelListing",
"Effect": "Allow",
"Action": [
"bedrock:ListFoundationModels",
"bedrock:GetFoundationModel"
],
"Resource": "*"
}
]
}
This policy restricts access to specific models while allowing model discovery. Adjust the resource ARNs based on which models your organization has approved.
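If you want an unapproved model ID to fail fast in application code rather than surface later as an AccessDeniedException from Bedrock, a small client-side guard can mirror the policy's allow-list. This is a sketch, not an SDK feature; keep the list in sync with the resource ARNs in your IAM policy.

```typescript
// Client-side guard mirroring the IAM policy above: reject model IDs
// that are not on the organization's approved list before any request
// is sent, so misconfigurations fail with a clear local error.
const APPROVED_MODELS = new Set([
  "anthropic.claude-3-opus-20240229-v1:0",
  "anthropic.claude-3-sonnet-20240229-v1:0",
  "anthropic.claude-3-haiku-20240307-v1:0",
  "meta.llama3-70b-instruct-v1:0",
  "amazon.titan-text-express-v1",
]);

function assertModelApproved(modelId: string): void {
  if (!APPROVED_MODELS.has(modelId)) {
    throw new Error(
      `Model "${modelId}" is not on the IAM-approved list; ` +
        `Bedrock would reject the request with AccessDeniedException.`
    );
  }
}

assertModelApproved("amazon.titan-text-express-v1"); // passes silently
```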
Optional: VPC Endpoint Configuration
For enhanced security, configure a VPC endpoint for Bedrock to keep traffic within the AWS network:
# Create VPC endpoint for Bedrock
aws ec2 create-vpc-endpoint \
--vpc-id vpc-12345678 \
--service-name com.amazonaws.us-east-1.bedrock-runtime \
--vpc-endpoint-type Interface \
--subnet-ids subnet-12345678 subnet-87654321 \
--security-group-ids sg-12345678
Configuring NeuroLink Provider Settings
Now that AWS is configured, you will set up NeuroLink to connect to your Bedrock resources.
Configuring AWS Credentials
Configure your AWS credentials using environment variables:
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
Alternatively, use AWS profiles or IAM roles if running on AWS infrastructure.
Verifying the Connection
Test your configuration using the NeuroLink CLI:
# Send a test request to Bedrock
npx @juspay/neurolink generate "Respond with 'Connection successful' if you receive this message." --provider bedrock
Working with Bedrock Models Through NeuroLink
With the connection verified, you will now invoke Bedrock models through NeuroLink’s unified API.
Basic Model Invocation
Use the standard NeuroLink API to invoke Bedrock models:
import { NeuroLink } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Using Claude 3.5 Sonnet on Bedrock
const response = await neurolink.generate({
input: { text: "Explain quantum computing in simple terms." },
provider: "bedrock",
model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
systemPrompt: "You are a helpful assistant.",
maxTokens: 1024,
temperature: 0.7
});
console.log(response.content);
Using Different Claude Models
NeuroLink supports all Claude models available on Bedrock:
import { NeuroLink, BedrockModels } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Claude 4.5 Sonnet - Latest model
const latestResponse = await neurolink.generate({
input: { text: "Analyze this complex business scenario..." },
provider: "bedrock",
model: BedrockModels.CLAUDE_4_5_SONNET, // "anthropic.claude-sonnet-4-5-20250929-v1:0"
maxTokens: 4096
});
// Claude 3.5 Haiku - Fast and cost-effective
const quickResponse = await neurolink.generate({
input: { text: "Quick summary of cloud computing benefits" },
provider: "bedrock",
model: BedrockModels.CLAUDE_3_5_HAIKU, // "anthropic.claude-3-5-haiku-20241022-v1:0"
maxTokens: 512
});
Global Cross-Region Inference
For multi-region deployments with automatic failover:
const neurolink = new NeuroLink();
// Use global inference prefix in the model name for cross-region routing
const response = await neurolink.generate({
input: { text: "Your prompt here" },
provider: 'bedrock',
model: 'global.anthropic.claude-sonnet-4-5-20250929-v1:0',
region: 'us-east-1'
});
Benefits: automatic failover, lower latency, higher availability, same pricing.
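Since cross-region routing is selected purely by the `global.` prefix on the model ID, a tiny helper can apply the prefix consistently across your codebase. `toGlobalInferenceId` is a hypothetical utility, not an SDK function.

```typescript
// Convert a regional Bedrock model ID into its global cross-region
// inference ID by prepending the "global." prefix. Idempotent, so it
// is safe to call on IDs that are already global.
function toGlobalInferenceId(modelId: string): string {
  return modelId.startsWith("global.") ? modelId : `global.${modelId}`;
}

console.log(toGlobalInferenceId("anthropic.claude-sonnet-4-5-20250929-v1:0"));
// → global.anthropic.claude-sonnet-4-5-20250929-v1:0
```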
Streaming Responses
Enable streaming for real-time response handling:
Tip: The Bedrock integration uses the AWS SDK's native ConverseStreamCommand under the hood. The interface below is NeuroLink's unified streaming API: you write the same code regardless of provider.
const result = await neurolink.stream({
input: { text: "Explain the benefits of cloud computing in detail..." },
provider: "bedrock",
model: "anthropic.claude-3-5-sonnet-20241022-v2:0",
maxTokens: 4096
});
for await (const chunk of result.stream) {
if ('content' in chunk) {
process.stdout.write(chunk.content);
}
}
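The loop above prints chunks as they arrive. If you also need the complete text afterwards (for logging, caching, or post-processing), accumulate while streaming. The sketch below substitutes a local async generator for the NeuroLink stream so the accumulation pattern is runnable without AWS access; `fakeStream` is a stand-in, not a real API.

```typescript
// Accumulate streamed chunks into the full response while still
// emitting them incrementally. A local async generator stands in
// for the NeuroLink stream here.
type Chunk = { content: string };

async function* fakeStream(): AsyncGenerator<Chunk> {
  for (const piece of ["Cloud ", "computing ", "scales ", "on demand."]) {
    yield { content: piece };
  }
}

async function collectStream(stream: AsyncGenerator<Chunk>): Promise<string> {
  let full = "";
  for await (const chunk of stream) {
    process.stdout.write(chunk.content); // incremental output
    full += chunk.content;               // accumulate for later use
  }
  return full;
}

const full = await collectStream(fakeStream());
// full === "Cloud computing scales on demand."
```

With the real API, pass `result.stream` from `neurolink.stream(...)` to `collectStream` instead of the fake generator.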
Working with Llama Models
Meta’s Llama models on Bedrock work seamlessly with NeuroLink. The latest addition is Llama 4, featuring a Mixture of Experts (MoE) architecture and an impressive 10 million token context window, making it ideal for processing extensive documents and maintaining long conversations.
Warning: While Llama 4 Scout natively supports up to 10M tokens, AWS Bedrock currently caps this at 3.5M tokens. Plan your context usage accordingly; AWS has announced plans to expand this limit.
import { NeuroLink, BedrockModels } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Llama 4 Scout - MoE architecture with 10M token context
const llama4Response = await neurolink.generate({
input: { text: "Analyze this lengthy document and provide insights..." },
provider: "bedrock",
model: BedrockModels.LLAMA_4_SCOUT_17B, // "meta.llama4-scout-17b-instruct-v1:0"
systemPrompt: "You are an expert analyst.",
maxTokens: 4096
});
// Llama 3.3 70B for complex tasks
const response = await neurolink.generate({
input: { text: "Write a Python function to calculate fibonacci numbers efficiently." },
provider: "bedrock",
model: BedrockModels.LLAMA_3_3_70B, // "meta.llama3-3-70b-instruct-v1:0"
systemPrompt: "You are an expert programmer.",
maxTokens: 2048
});
console.log(response.content);
Using Amazon Nova Models
Amazon’s Nova models offer excellent performance for enterprise use cases. The newer Nova 2 family introduces models like Nova 2 Lite and Nova 2 Sonic with up to 1 million token context windows, providing flexible options for different complexity requirements:
import { NeuroLink, BedrockModels } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Nova 2 Lite for efficient reasoning with 1M context
const advancedResponse = await neurolink.generate({
input: { text: "Perform deep analysis on this complex dataset..." },
provider: "bedrock",
model: BedrockModels.NOVA_2_LITE, // "amazon.nova-2-lite-v1:0"
maxTokens: 4096
});
// Nova Pro for balanced performance
const response = await neurolink.generate({
input: { text: "Summarize the key features of cloud computing." },
provider: "bedrock",
model: BedrockModels.NOVA_PRO, // "amazon.nova-pro-v1:0"
maxTokens: 1024
});
// Nova Lite for faster responses
const quickResponse = await neurolink.generate({
input: { text: "What is serverless computing?" },
provider: "bedrock",
model: BedrockModels.NOVA_LITE, // "amazon.nova-lite-v1:0"
maxTokens: 512
});
Note: Model names and IDs in code examples reflect versions available at time of writing. Model availability, naming conventions, and pricing change frequently. Always verify current model IDs with your provider’s documentation before deploying to production.
Regional Deployment Strategies
Next, you will set up multi-region routing for compliance, latency optimization, and disaster recovery.
Latency-Based Routing
Optimize response times by routing requests to the nearest region:
import { NeuroLink, BedrockModels } from '@juspay/neurolink';
const neurolink = new NeuroLink();
// Specify region for latency optimization
const response = await neurolink.generate({
input: { text: "Quick question about weather." },
provider: "bedrock",
model: BedrockModels.CLAUDE_3_5_HAIKU,
region: "eu-west-1" // Route to nearest region
});
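To choose the `region` value dynamically, you can probe each regional endpoint and route to the fastest one. The sketch below shows only the selection logic; the latency numbers are illustrative placeholders, and in practice you would refresh them with periodic measurements.

```typescript
// Pick the region with the lowest measured round-trip latency.
// These numbers are illustrative, not real measurements.
const measuredLatencyMs: Record<string, number> = {
  "us-east-1": 92,
  "eu-west-1": 18,
  "ap-south-1": 140,
};

function nearestRegion(latencies: Record<string, number>): string {
  return Object.entries(latencies).reduce((best, next) =>
    next[1] < best[1] ? next : best
  )[0];
}

console.log(nearestRegion(measuredLatencyMs)); // → eu-west-1
```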
Monitoring and Observability
NeuroLink Dashboard Metrics
The NeuroLink dashboard provides comprehensive visibility into your Bedrock usage:
- Request volume by model and region
- Latency percentiles (p50, p95, p99)
- Error rates and types
- Token usage and costs
- Cache hit rates
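If you export raw latency samples and want to reproduce the dashboard's percentile figures yourself, the nearest-rank method below is a simple way to compute them. This is a generic sketch, not NeuroLink's exact aggregation; other interpolation schemes give slightly different values.

```typescript
// Nearest-rank percentile: sort the samples, then index by rank.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latencies = [120, 95, 300, 110, 105, 880, 130, 98, 102, 115];
console.log(percentile(latencies, 50)); // p50 → 110
console.log(percentile(latencies, 95)); // p95 → 880
console.log(percentile(latencies, 99)); // p99 → 880
```

The gap between p50 and p99 here illustrates why tail latency deserves its own alerting threshold.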
Security Best Practices
Credential Rotation
Implement regular rotation of IAM role credentials:
# Generate new external ID
NEW_EXTERNAL_ID=$(uuidgen)
# Write $NEW_EXTERNAL_ID into updated-trust-policy.json, then apply
# the updated IAM role trust policy
aws iam update-assume-role-policy \
--role-name NeuroLinkBedrockRole \
--policy-document file://updated-trust-policy.json
Ensure your application retrieves updated credentials from AWS Secrets Manager or your preferred secrets management solution.
Troubleshooting Common Issues
Access Denied Errors
If you encounter access denied errors, verify:
- The IAM role trust policy includes the correct NeuroLink account ID
- The external ID matches between IAM and NeuroLink configuration
- The Bedrock access policy includes all required models
- Model access has been enabled in the Bedrock console
# Test role assumption
aws sts assume-role \
--role-arn arn:aws:iam::YOUR_ACCOUNT:role/NeuroLinkBedrockRole \
--role-session-name test-session \
--external-id your-neurolink-external-id
Throttling Issues
If you experience throttling, consider:
- Requesting quota increases through AWS Service Quotas
- Implementing exponential backoff in your application
- Distributing load across multiple regions
- Using NeuroLink’s built-in retry mechanisms
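NeuroLink's built-in retries cover most cases, but if you add application-level retries on top, exponential backoff with full jitter is the standard pattern. The sketch below is a generic implementation, not an SDK feature; `withRetries` and `backoffDelayMs` are hypothetical helper names.

```typescript
// Exponential backoff with full jitter: the delay ceiling doubles per
// attempt up to a cap, and jitter spreads concurrent clients apart so
// they do not retry in lockstep.
function backoffDelayMs(attempt: number, baseMs = 100, capMs = 10_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling; // full jitter: uniform in [0, ceiling)
}

async function withRetries<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of attempts
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```

Wrap a throttle-prone call such as `neurolink.generate(...)` in `withRetries` so transient ThrottlingException failures are retried with growing, jittered delays.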
Model Not Found Errors
Ensure the model ID in NeuroLink matches the exact Bedrock model identifier:
# List exact model IDs
aws bedrock list-foundation-models --region us-east-1 \
--query 'modelSummaries[*].modelId' --output table
What You Built
You now have a working AWS Bedrock integration with NeuroLink that includes secure IAM access, multi-model invocation with streaming, and multi-region deployment. From here, add cost-based routing to direct requests to the cheapest model that meets your quality bar, or set up failover chains across regions for maximum availability.
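Cost-based routing can be prototyped with a simple catalog lookup: filter to models that meet the request's quality bar, then take the cheapest. The prices and quality tiers below are illustrative placeholders, not current Bedrock pricing, and this sketch stands apart from NeuroLink's own routing algorithms.

```typescript
// Choose the cheapest model whose quality tier meets the request's
// bar. Prices and tiers are illustrative placeholders only.
interface ModelInfo {
  id: string;
  costPer1kTokens: number; // USD, illustrative
  qualityTier: number;     // higher = more capable
}

const catalog: ModelInfo[] = [
  { id: "anthropic.claude-3-5-haiku-20241022-v1:0", costPer1kTokens: 0.001, qualityTier: 1 },
  { id: "amazon.nova-pro-v1:0", costPer1kTokens: 0.0008, qualityTier: 2 },
  { id: "anthropic.claude-3-5-sonnet-20241022-v2:0", costPer1kTokens: 0.003, qualityTier: 3 },
];

function cheapestModelFor(minTier: number): ModelInfo | undefined {
  return catalog
    .filter((m) => m.qualityTier >= minTier)
    .sort((a, b) => a.costPer1kTokens - b.costPer1kTokens)[0];
}

console.log(cheapestModelFor(2)?.id); // cheapest model at tier >= 2
```

The selected ID can then be passed as the `model` parameter in a `neurolink.generate` call.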
For additional support, consult the NeuroLink GitHub repository or open an issue. Watch the repository to stay informed about new Bedrock models and features as they become available.