# Configure LLM model
Witty needs a valid LLM model. This section describes how to add an LLM configuration.
## LLM structure
An LLM model is composed of the following fields:
- `provider`: the LLM provider;
- `api_key`: the API key of the provider;
- `endpoint`: the URL where the LLM model is located;
- `api_version`: the API version defined by the provider;
- `model`: the LLM model name;
- `deployment`: the deployment name. It could be different from `model`;
- `custom_header`: an optional field to include the API key in a custom header to be sent to the provider.
Here's an example of an LLM configuration:
```json
{
  "provider": "azure_openai",
  "api_key": "xxx",
  "endpoint": "https://xxx.cognitiveservices.azure.com/",
  "api_version": "2025-01-01-preview",
  "model": "gpt-4.1",
  "deployment": "gpt-4.1",
  "custom_header": "X-API-Key"
}
```
## Interacting with the LLM
Currently, the following APIs are available to interact with the LLM configuration:
- `GET /witty/v1/llm/config`: retrieve the LLM configuration;
- `POST /witty/v1/llm/config`: create/edit an LLM configuration. The body is a JSON in the LLM structure seen before;
- `POST /witty/v1/llm/chat`: chat with the LLM. The body is a JSON with the following format:
```json
{
  "query": "Some text"
}
```
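To illustrate how these endpoints fit together, here is a minimal usage sketch in Python using the `requests` library. The base URL is an assumption (not an official default), and the sketch assumes the endpoints return JSON bodies; adjust both to your Witty deployment.

```python
# Minimal usage sketch for the Witty LLM APIs.
# Assumption: the service is reachable at BASE_URL; adjust to your deployment.
import requests

BASE_URL = "http://localhost:3000"  # placeholder, not an official default

# Create or edit the LLM configuration (body follows the LLM structure above)
config = {
    "provider": "azure_openai",
    "api_key": "xxx",
    "endpoint": "https://xxx.cognitiveservices.azure.com/",
    "api_version": "2025-01-01-preview",
    "model": "gpt-4.1",
    "deployment": "gpt-4.1",
}
requests.post(f"{BASE_URL}/witty/v1/llm/config", json=config).raise_for_status()

# Retrieve the stored LLM configuration
current = requests.get(f"{BASE_URL}/witty/v1/llm/config")
print(current.json())

# Chat with the configured LLM
answer = requests.post(f"{BASE_URL}/witty/v1/llm/chat", json={"query": "Some text"})
print(answer.json())
```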
## Supported providers
Currently, the available providers for the LLM configuration are:
| LLM Provider | Description |
|---|---|
| azure_openai | Azure OpenAI Service |
Note: Azure OpenAI can be accessed through Azure Foundry and Azure API Management (APIM). In the first case, the `endpoint` field should be the Azure Foundry endpoint, while in the second case it should be the Azure API Management endpoint. If you are using Azure API Management, you can also include the `custom_header` field with the name of the header where the APIM subscription key is expected (e.g. `X-API-Key`).
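For instance, a configuration targeting an APIM gateway could look like the following sketch. The endpoint URL and header name are placeholders, not real values:

```json
{
  "provider": "azure_openai",
  "api_key": "xxx",
  "endpoint": "https://my-apim-instance.azure-api.net/",
  "api_version": "2025-01-01-preview",
  "model": "gpt-4.1",
  "deployment": "gpt-4.1",
  "custom_header": "X-API-Key"
}
```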
## Supported models
Currently, the Witty microservice has been tested against the following models/providers:
| LLM Provider | Model |
|---|---|
| azure_openai | gpt-5.2-chat (preferred) |
| azure_openai | gpt-4.1 |
| azure_openai | gpt-5 |
| azure_openai | gpt-4.5-preview |
| azure_openai | gpt-4o |
| azure_openai | o1 |