POST https://dashboard.laburen.com/api/agents/{agentId}/query

Example Request
curl --location --request POST 'https://dashboard.laburen.com/api/agents/<agentId>/query' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <API_KEY>' \
--data-raw '{
    "query": "Hello, I need help with my order",
    "conversationId": "clxxxxxxxxxxxxxxxxx",
    "visitorId": "clxxxxxxxxxxxxxxxxx"
}'
Example Response
{
  "answer": "Hello! I'd be happy to help you with your order. Could you please provide your order number so I can look it up?",
  "usage": {
    "completionTokens": 28,
    "promptTokens": 1250,
    "totalTokens": 1278,
    "cost": 0.0064
  },
  "sources": [
    {
      "source": "FAQ.pdf",
      "chunk": "To check your order status, please provide your order number..."
    }
  ],
  "approvals": [],
  "messageId": "clxxxxxxxxxxxxxxxxx",
  "conversationId": "clxxxxxxxxxxxxxxxxx",
  "visitorId": "clxxxxxxxxxxxxxxxxx",
  "request_human": false,
  "status": "UNRESOLVED"
}
This endpoint allows you to send messages to an AI agent and receive responses. It supports:
  • Simple text queries
  • File attachments (documents, images, audio)
  • Real-time response streaming
  • Continuation of existing conversations
  • Contact association (CRM)
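For a simple text query, the curl call above translates directly to JavaScript. The sketch below assumes Node 18+ (global fetch); the helper names are illustrative, but the endpoint, headers, and body fields are exactly as documented on this page:

```javascript
// Build the request body for a simple text query.
// conversationId/visitorId are optional: omit them on the first message
// and the API generates both (they come back in the response).
function buildQueryBody(query, options = {}) {
  const body = { query };
  if (options.conversationId) body.conversationId = options.conversationId;
  if (options.visitorId) body.visitorId = options.visitorId;
  return body;
}

// POST the query and return the parsed response
// (answer, sources, conversationId, etc.).
async function queryAgent(apiKey, agentId, query, options = {}) {
  const res = await fetch(
    `https://dashboard.laburen.com/api/agents/${agentId}/query`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`,
      },
      body: JSON.stringify(buildQueryBody(query, options)),
    }
  );
  if (!res.ok) throw new Error(`Query failed with status ${res.status}`);
  return res.json();
}
```

Save the `conversationId` returned in the response and pass it back on the next call to continue the same conversation.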

Path

agentId
string
required
The ID of the agent you want to query (CUID format).

Body

Required

query
string
required
The user’s message or question.

Optional - Basic Configuration

streaming
boolean
default:"false"
If true, responds with Server-Sent Events in real-time.
conversationId
string
ID to continue an existing conversation. Auto-generated if not provided.
visitorId
string
Unique ID of the visitor/user. Auto-generated if not provided.
channel
string
default:"api"
Source channel for the message. Valid values: api, dashboard, website, form, whatsapp, telegram, slack, meta, crisp, zapier, mail, mercadolibre, agent_builder, chatwoot, crmchatsappai.
context
string
Additional context for the AI (e.g., specific instructions).

Optional - Webhook (Conversational Mode)

webhookUrl
string
URL to receive the AI response asynchronously. Required when the agent has conversational mode enabled and channel is api. The webhook URL must:
  • Be a valid URL format
  • Use http:// or https:// protocol
  • Be reachable (the server validates connectivity)
  • Not point to localhost or private IPs in production (SSRF protection)
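The server performs its own validation (including a reachability check), but you can pre-check a webhookUrl client-side against the documented rules. This is a sketch, not the server's exact logic; the private-IP patterns cover only the common RFC 1918 ranges:

```javascript
// Client-side pre-check mirroring the documented webhookUrl rules:
// valid URL, http/https only, not localhost or a private address.
function isAcceptableWebhookUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a valid URL format
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false;
  const host = url.hostname;
  // Reject localhost and common private ranges (SSRF protection).
  if (host === 'localhost' || host === '127.0.0.1' || host === '[::1]') return false;
  if (/^10\./.test(host) || /^192\.168\./.test(host)) return false;
  if (/^172\.(1[6-9]|2\d|3[01])\./.test(host)) return false;
  return true;
}
```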

Optional - File Attachments

attachments
array
List of file attachments.

Optional - Contact (CRM)

contact
object
Contact data to associate with the conversation.

Response (without streaming)

answer
string
The agent’s response.
sources
array
Sources used to generate the response.
messageId
string
ID of the response message.
conversationId
string
ID of the conversation (save this to continue the conversation).
visitorId
string
ID of the visitor.
request_human
boolean
Whether the agent requested human intervention.
status
string
Conversation status (e.g., UNRESOLVED, RESOLVED).
usage
object
Token usage and cost information.
approvals
array
List of approvals for tool executions (if any).
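The fields a client typically needs to act on are `conversationId`, `visitorId`, and `request_human`. A small sketch (helper names are illustrative) of persisting the continuation fields and flagging human-handoff responses:

```javascript
// Fields worth persisting from a (non-streaming) query response so the
// next request continues the same conversation.
function continuationFields(response) {
  return {
    conversationId: response.conversationId,
    visitorId: response.visitorId,
  };
}

// Body for a follow-up message in the same conversation.
function followUpBody(query, previousResponse) {
  return { query, ...continuationFields(previousResponse) };
}

// True when the agent requested human intervention.
function needsHuman(response) {
  return response.request_human === true;
}
```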

Response (Conversational Mode with Webhook)

When the agent has conversational mode enabled and you provide a webhookUrl, the endpoint returns immediately with a queued status:
status
string
Always "queued" for webhook mode.
conversationId
string
ID of the conversation.
visitorId
string
ID of the visitor.
inputMessageId
string
ID of the user’s input message.
message
string
Confirmation message indicating the request was queued.
webhookUrl
string
The webhook URL where the response will be sent.

Webhook Payload

After processing (typically an 8-second delay, used to batch multiple messages), a POST request is sent to your webhookUrl with the following payload:
{
  "conversationId": "clxxxxxxxxxxxxxxxxx",
  "visitorId": "visitor-abc123",
  "status": "success",
  "messages": [
    {
      "id": "msg_001",
      "text": "First part of the response...",
      "createdAt": "2024-01-15T10:30:00.000Z"
    },
    {
      "id": "msg_002",
      "text": "Second part of the response...",
      "createdAt": "2024-01-15T10:30:00.000Z"
    }
  ],
  "agentResponse": {
    "answer": "Complete response without splitting...",
    "sources": [],
    "usage": {
      "completionTokens": 150,
      "promptTokens": 500,
      "totalTokens": 650,
      "cost": 0.0065
    },
    "metadata": {}
  },
  "messageId": "msg_answer456"
}
Webhook Headers:
  • Content-Type: application/json
  • X-Laburen-Event: agent.response
In conversational mode, long responses are automatically split into multiple messages (maximum 3 for most channels, up to 10 for Instagram/Meta).

Streaming Response

When streaming: true, the endpoint responds with Server-Sent Events:
Content-Type: text/event-stream

event: answer
data: Hello,

event: answer
data:  how can

event: answer
data:  I help you?

event: endpoint_response
data: {"messageId":"...","answer":"...","conversationId":"..."}

data: [DONE]
Events:
  • answer: Partial response text (concatenate to build the full answer)
  • endpoint_response: Full response object (JSON) with all metadata
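The two event types above can be folded into a final result with a small reducer, independent of whichever SSE client you use (the full fetch-based example appears below):

```javascript
// Fold a stream of { event, data } pairs into the final state:
// the concatenated answer text plus the parsed endpoint_response object.
function reduceStream(events) {
  let answer = '';
  let endpointJson = '';
  for (const { event, data } of events) {
    if (data === '[DONE]') break;            // end of stream
    if (event === 'answer') answer += data;  // partial text chunks
    else if (event === 'endpoint_response') endpointJson += data;
  }
  return {
    answer,
    response: endpointJson ? JSON.parse(endpointJson) : null,
  };
}
```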

Error Responses

Status Code  Type            Description
400          Bad Request     Invalid channel value. Returns the list of valid channels.
400          Bad Request     Missing webhookUrl when the agent has conversational mode enabled and channel is api.
400          Bad Request     Invalid webhookUrl (malformed URL, invalid protocol, unreachable server).
401          UNAUTHORIZED    Invalid API key or insufficient permissions.
404          NOT_FOUND       Agent not found.
500          Internal Error  Error processing the message.
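One way a client might map these codes to behavior is to retry only server errors. The retry policy below is an assumption on our part, not something the API prescribes:

```javascript
// Classify a documented error status into a client-side reaction.
// Retrying only 5xx is an assumed policy, not part of the API contract.
function classifyError(status) {
  if (status === 400) return { retry: false, reason: 'invalid request (channel, webhookUrl, or body)' };
  if (status === 401) return { retry: false, reason: 'invalid API key or insufficient permissions' };
  if (status === 404) return { retry: false, reason: 'agent not found' };
  if (status >= 500) return { retry: true, reason: 'server error while processing the message' };
  return { retry: false, reason: `unexpected status ${status}` };
}
```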

Streaming Example (JavaScript)

import {
  EventStreamContentType,
  fetchEventSource,
} from '@microsoft/fetch-event-source';

const apiUrl = 'https://dashboard.laburen.com/api';
const apiKey = '<API_KEY>';
const agentId = '<agentId>';

let answer = '';
let endpointResponse = '';
const ctrl = new AbortController();

await fetchEventSource(`${apiUrl}/agents/${agentId}/query`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${apiKey}`,
  },
  signal: ctrl.signal,
  body: JSON.stringify({
    streaming: true,
    query: 'Hello, I need help',
    conversationId: 'optional-conversation-id',
    visitorId: 'optional-visitor-id',
  }),

  async onopen(response) {
    if (response.status === 401) {
      throw new Error('Unauthorized');
    }
    if (response.status === 402) {
      throw new Error('Usage limit exceeded');
    }
  },

  onmessage: (event) => {
    if (event.data === '[DONE]') {
      // End of stream
      ctrl.abort();

      // Parse the full response
      const fullResponse = JSON.parse(endpointResponse);
      console.log('Full response:', fullResponse);
    } else if (event.data?.startsWith('[ERROR]')) {
      console.error('Stream error:', event.data);
    } else if (event.event === 'endpoint_response') {
      endpointResponse += event.data;
    } else if (event.event === 'answer') {
      answer += event.data;
      // Update UI with partial answer
      console.log('Partial answer:', answer);
    }
  },

  onerror: (error) => {
    console.error('Connection error:', error);
  },
});

Webhook Example (Conversational Mode)

When your agent has conversational mode enabled, use webhookUrl to receive the AI response asynchronously:
cURL
curl --location --request POST 'https://dashboard.laburen.com/api/agents/<agentId>/query' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <API_KEY>' \
--data-raw '{
    "query": "Hello, I need help",
    "channel": "api",
    "webhookUrl": "https://your-server.com/webhook/laburen"
}'
Immediate Response:
{
  "status": "queued",
  "conversationId": "clxxxxxxxxxxxxxxxxx",
  "visitorId": "visitor-abc123",
  "inputMessageId": "msg_input123",
  "message": "Your message has been queued. Response will be sent to the provided webhookUrl after processing.",
  "webhookUrl": "https://your-server.com/webhook/laburen"
}
Webhook Receiver Example (Node.js/Express):
const express = require('express');
const app = express();

app.use(express.json());

app.post('/webhook/laburen', (req, res) => {
  const event = req.headers['x-laburen-event'];

  if (event === 'agent.response') {
    const { conversationId, messages, agentResponse } = req.body;

    console.log('Conversation:', conversationId);
    console.log('Messages:', messages.length);

    // Process each split message
    messages.forEach((msg, i) => {
      console.log(`Message ${i + 1}:`, msg.text);
    });

    // Full answer available in agentResponse
    console.log('Full answer:', agentResponse.answer);
    console.log('Usage:', agentResponse.usage);
  }

  res.status(200).send('OK');
});

app.listen(4000, () => {
  console.log('Webhook server listening on port 4000');
});