59 changes: 59 additions & 0 deletions docs/providers/documentation/bedrock-provider.mdx
@@ -0,0 +1,59 @@
---
title: "Bedrock Provider"
description: "The Bedrock Provider allows for integrating AWS Bedrock foundation models into Keep."
---
import AutoGeneratedSnippet from '/snippets/providers/bedrock-snippet-autogenerated.mdx';

<Tip>
The Bedrock Provider supports querying multiple foundation models including Anthropic Claude, Meta Llama, and Amazon Titan for prompt-based interactions.
</Tip>

## Outputs

The Bedrock Provider outputs the foundation model's response to the provided prompt. When a `structured_output_format` is configured, it returns structured JSON instead of free-form text.

## Supported Models

- **Anthropic Claude**: Uses Messages API format with system prompts
- **Meta Llama**: Prompt-based generation with configurable parameters
- **Amazon Titan Text**: Text generation with configurable parameters

## Connecting with the Provider

The Bedrock Provider supports two authentication methods:

### 1. SSO/IAM Role Authentication (Recommended)
Leave `access_key` and `secret_access_key` empty to use AWS SSO or IAM role authentication:

```yaml
authentication:
access_key: null
secret_access_key: null
region: "us-east-1"
```

### 2. Access Key Authentication
Provide explicit AWS credentials:

```yaml
authentication:
access_key: "your-access-key"
secret_access_key: "your-secret-key"
region: "us-east-1"
```

## Required Permissions

Your AWS credentials must have the following permissions:
- `bedrock:ListFoundationModels`
- `bedrock:InvokeModel`
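
These permissions can be granted with a minimal IAM policy; a sketch (the wildcard `Resource` is for illustration — scope it to specific model ARNs where possible):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:ListFoundationModels",
        "bedrock:InvokeModel"
      ],
      "Resource": "*"
    }
  ]
}
```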

## Configuration Parameters

- **model**: Foundation model ID (default: `meta.llama3-3-70b-instruct-v1:0`)
- **max_tokens**: Maximum tokens to generate (default: 1024)
- **temperature**: Controls randomness (0.0-1.0, default: 0.7)
- **top_p**: Controls diversity (0.0-1.0, default: 0.9)
- **structured_output_format**: JSON schema for structured responses
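
As a sketch, these parameters might appear in a Keep workflow step as follows (the `with` field names mirror the parameters above; the exact step syntax for this provider is an assumption based on Keep's general workflow format):

```yaml
steps:
  - name: summarize-alert
    provider:
      type: bedrock
      config: "{{ providers.bedrock }}"
      with:
        prompt: "Summarize the following alert: {{ alert.name }}"
        model: "meta.llama3-3-70b-instruct-v1:0"
        max_tokens: 1024
        temperature: 0.7
        top_p: 0.9
```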

<AutoGeneratedSnippet />
8 changes: 8 additions & 0 deletions docs/providers/overview.mdx
@@ -44,6 +44,14 @@ By leveraging Keep Providers, users are able to deeply integrate Keep with the t
}
></Card>

<Card
title="Bedrock"
href="/providers/documentation/bedrock-provider"
icon={
<img src="https://img.logo.dev/amazon.com?token=pk_dfXfZBoKQMGDTIgqu7LvYg" />
}
></Card>

<Card
title="AppDynamics"
href="/providers/documentation/appdynamics-provider"
5 changes: 5 additions & 0 deletions keep-ui/app/(keep)/incidents/[id]/chat/incident-chat.tsx
@@ -547,6 +547,11 @@ export function IncidentChat({
// using 'incident-chat' class to apply styles only to that chat component
<Card className="h-full incident-chat">
<div className="chat-container">
<div className="flex items-center justify-between p-2 border-b border-tremor-border">
<span className="text-sm text-tremor-content-subtle">
AI Model: {process.env.NEXT_PUBLIC_BEDROCK_MODEL_ID || "Unknown"}
</span>
</div>
<div className="chat-messages">
<CopilotChat
className="-mx-2"
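
The UI label above reads the model id from the environment. A sketch of the environment configuration this change appears to rely on (variable names are taken from the diff; the values are examples only):

```shell
AI_PROVIDER=bedrock
BEDROCK_MODEL_ID=meta.llama3-3-70b-instruct-v1:0
AWS_REGION=us-east-1
NEXT_PUBLIC_BEDROCK_MODEL_ID=meta.llama3-3-70b-instruct-v1:0
```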
52 changes: 42 additions & 10 deletions keep-ui/app/api/copilotkit/route.ts
@@ -1,24 +1,56 @@
import {
CopilotRuntime,
OpenAIAdapter,
LangChainAdapter,
copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import OpenAI, { OpenAIError } from "openai";
import { NextRequest } from "next/server";
import { ChatBedrockConverse } from "@langchain/aws";

export const POST = async (req: NextRequest) => {
function initializeCopilotRuntime() {
try {
const openai = new OpenAI({
organization: process.env.OPEN_AI_ORGANIZATION_ID,
apiKey: process.env.OPEN_AI_API_KEY,
});
const serviceAdapter = new OpenAIAdapter({
openai,
...(process.env.OPENAI_MODEL_NAME
? { model: process.env.OPENAI_MODEL_NAME }
: {}),
});
const provider = process.env.AI_PROVIDER || "openai";

let serviceAdapter;

if (provider === "bedrock") {
console.log("DEBUG: Using Bedrock model:", process.env.BEDROCK_MODEL_ID, "in region:", process.env.AWS_REGION);
const bedrockModel = new ChatBedrockConverse({
model: process.env.BEDROCK_MODEL_ID,
region: process.env.AWS_REGION,
// Use default credential provider chain (same as Bedrock provider)
// This will use AWS profile, SSO, or environment variables automatically
});

serviceAdapter = new LangChainAdapter({
chainFn: async ({ messages }) => {
// Filter out empty messages and ensure proper format
const filteredMessages = messages.filter(
(msg) => typeof msg.content === "string" && msg.content.trim().length > 0
);

console.log("DEBUG: Filtered messages:", filteredMessages.length, "out of", messages.length);

const response = await bedrockModel.invoke(filteredMessages);
return response;
},
});
} else {
const openai = new OpenAI({
organization: process.env.OPEN_AI_ORGANIZATION_ID,
apiKey: process.env.OPEN_AI_API_KEY,
});
serviceAdapter = new OpenAIAdapter({
openai,
...(process.env.OPENAI_MODEL_NAME
? { model: process.env.OPENAI_MODEL_NAME }
: {}),
});
}

const runtime = new CopilotRuntime();
return { runtime, serviceAdapter };
} catch (error) {
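
The empty-message filter inside the `LangChainAdapter` chain can be exercised in isolation; a minimal sketch (the `Msg` type and `filterEmptyMessages` helper are stand-ins introduced here, not names from the PR):

```typescript
// Assumed minimal shape of the runtime's message objects.
type Msg = { content?: unknown };

// Keep only messages whose content is a non-empty, non-whitespace string,
// since Bedrock's Converse API rejects messages with empty content.
function filterEmptyMessages<T extends Msg>(messages: T[]): T[] {
  return messages.filter(
    (msg) => typeof msg.content === "string" && msg.content.trim().length > 0
  );
}

const kept = filterEmptyMessages([
  { content: "hello" },
  { content: "   " },
  { content: undefined },
]);
console.log(kept.length); // 1
```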