AI Assistant
The Model Providers section introduces the model providers supported by the AI Data Query Assistant. This document mainly describes the system-level configuration items for the AI Data Query Assistant.
General Model Configuration
The following configuration items are system-level settings for the AI Assistant and are not specific to any model provider.
LLM_ANALYZE_RAW_DATA
In the page settings, this is "Allow the model to analyze raw data." Its function is to determine whether the AI Assistant analyzes the original input data. If your data is sensitive, you can disable this setting.
LLM_ANALYZE_RAW_DATA_LIMIT
In the page configuration, this is "Allowed number of raw data rows for analysis". It caps the number of raw data rows that can be analyzed. Choose a value based on the model provider's processing capabilities, token limits, and your specific requirements.
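To make these two raw-data settings concrete, here is a minimal hypothetical sketch; the function and the exact way the cap is applied are illustrative assumptions, not the product's implementation:

```python
# Hypothetical sketch (not product code): cap the raw rows sent to the model.
LLM_ANALYZE_RAW_DATA = True        # "Allow the model to analyze raw data"
LLM_ANALYZE_RAW_DATA_LIMIT = 100   # "Allowed number of raw data rows for analysis"

def rows_for_analysis(rows):
    """Return the rows the model is allowed to see."""
    if not LLM_ANALYZE_RAW_DATA:
        return []                  # sensitive data: send no raw rows at all
    return rows[:LLM_ANALYZE_RAW_DATA_LIMIT]

print(len(rows_for_analysis([{"id": i} for i in range(500)])))  # -> 100
```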
LLM_ENABLE_SEED
In the page configuration, this is "Use seed parameter". Its function is to control whether to enable a random seed when generating responses, in order to introduce diversity in the results.
LLM_API_SEED
In the page configuration, this is the "seed parameter". It sets the random seed value used when generating responses. Used together with LLM_ENABLE_SEED, it can be set to any value or left at its default.
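For context, OpenAI-compatible chat APIs accept an optional seed parameter; a hedged sketch of how these two settings might map onto a request (the model name and surrounding wiring are placeholders):

```python
# Illustrative sketch: attach a seed to an OpenAI-compatible chat request
# only when the feature is switched on.
LLM_ENABLE_SEED = True
LLM_API_SEED = 42

def build_request(messages):
    params = {"model": "gpt-4o", "messages": messages}
    if LLM_ENABLE_SEED:
        params["seed"] = LLM_API_SEED  # controls sampling randomness
    return params

print(build_request([{"role": "user", "content": "total sales last month"}]))
```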
LLM_SUGGEST_QUESTION_LOCALLY
In the page configuration, this is "Do not use the model to generate suggested questions". It specifies whether a large language model is used when generating suggested questions; see the sketch after the list below.
- true: Generated by local rules
- false: Generated by a large language model
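A minimal, hypothetical sketch of this branch (the local rule set and the ask_llm callback are illustrative stand-ins, not product APIs):

```python
LLM_SUGGEST_QUESTION_LOCALLY = True

LOCAL_RULE_QUESTIONS = ["Show sales by month", "Top 10 products by revenue"]

def suggest_questions(fields, ask_llm):
    if LLM_SUGGEST_QUESTION_LOCALLY:
        return LOCAL_RULE_QUESTIONS          # true: local rules, no API call
    return ask_llm(f"Suggest questions about these fields: {fields}")

print(suggest_questions(["date", "sales"], ask_llm=lambda prompt: ["..."]))
```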
LLM_SELECT_ALL_FIELDS_THRESHOLD
In the page configuration, this is labeled "Allow Model to Analyze Metadata (Threshold)". It sets the field-count threshold below which all fields are selected, and it only takes effect when LLM_SELECT_FIELDS_SHORTCUT is set to true. Adjust as needed.
LLM_SELECT_FIELDS_SHORTCUT
This parameter determines whether to skip field selection and directly use all fields to generate HQL when a dataset has only a few fields. It works together with LLM_SELECT_ALL_FIELDS_THRESHOLD and should be configured for your specific scenario. Generally it does not need to be set to true; enable it if you are particularly sensitive to speed or want to skip the field-selection step. Note, however, that skipping field selection may reduce the accuracy of the final data query.
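A hypothetical sketch of how the shortcut and its threshold interact (pick_fields_via_llm is an illustrative callback):

```python
LLM_SELECT_FIELDS_SHORTCUT = True
LLM_SELECT_ALL_FIELDS_THRESHOLD = 15

def fields_for_hql(all_fields, pick_fields_via_llm):
    if LLM_SELECT_FIELDS_SHORTCUT and len(all_fields) <= LLM_SELECT_ALL_FIELDS_THRESHOLD:
        return all_fields                     # few fields: skip selection entirely
    return pick_fields_via_llm(all_fields)    # otherwise let the model choose

print(fields_for_hql(["date", "region", "sales"], pick_fields_via_llm=lambda f: f[:2]))
```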
LLM_API_SLEEP_INTERVAL
In the page configuration, this is labeled "API Call Interval (seconds)". It sets the sleep interval between API requests, in seconds. Adjust it to the request frequency you need; for model APIs that enforce rate limits, consider setting this value.
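A minimal sketch of client-side rate limiting with a fixed sleep interval, assuming requests are sent sequentially:

```python
import time

LLM_API_SLEEP_INTERVAL = 1.0   # seconds to sleep between successive calls

def call_sequentially(requests, send):
    """Hypothetical client-side rate limiting via a fixed sleep interval."""
    results = []
    for req in requests:
        results.append(send(req))
        time.sleep(LLM_API_SLEEP_INTERVAL)
    return results

print(call_sequentially(["q1", "q2"], send=lambda q: q.upper()))
```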
HISTORY_LIMIT
In the page configuration, this is the "Number of Consecutive Conversation Contexts". It determines how many historical conversation entries are carried when interacting with the large model.
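A minimal sketch of how such a limit might trim the history, assuming the most recent entries are kept:

```python
HISTORY_LIMIT = 5   # number of past entries carried into each request

def context_window(history):
    """Hypothetical sketch: keep only the most recent entries."""
    return history[-HISTORY_LIMIT:]

history = [f"message {i}" for i in range(12)]
print(context_window(history))   # -> the last 5 messages only
```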
LLM_ENABLE_DRIVER
In the page configuration, this is labeled "Use Model to Infer Intent". It controls whether the model is used to judge when a question should be rejected and to refine questions based on conversational context; the context range is determined by HISTORY_LIMIT. Generally, enable this only when context referencing or question rejection is required.
MAX_ITERATIONS
In the page configuration, this is the "Maximum Model Inference Iterations". It sets the maximum number of iterations, controlling how many times the system retries when a large-model call fails.
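A hypothetical sketch of a retry loop bounded by this setting (generate and is_valid are illustrative stand-ins):

```python
MAX_ITERATIONS = 3

def generate_with_retries(generate, is_valid):
    """Hypothetical retry loop bounded by MAX_ITERATIONS."""
    for attempt in range(MAX_ITERATIONS):
        result = generate(attempt)
        if is_valid(result):
            return result
    raise RuntimeError(f"no valid result after {MAX_ITERATIONS} attempts")

# A toy run that succeeds on the second attempt.
print(generate_with_retries(generate=lambda n: n, is_valid=lambda r: r >= 1))
```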
LLM_API_REQUIRE_JSON_RESP
Determines whether the API response format must be JSON. This configuration item is only supported by OpenAI and generally does not need to be changed.
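For reference, OpenAI's chat completions API exposes JSON mode via the response_format parameter; a hedged sketch of how this flag might map onto it:

```python
LLM_API_REQUIRE_JSON_RESP = True

def request_params(messages):
    params = {"model": "gpt-4o", "messages": messages}
    if LLM_API_REQUIRE_JSON_RESP:
        # OpenAI JSON mode; the prompt itself must also ask for JSON output.
        params["response_format"] = {"type": "json_object"}
    return params

print(request_params([{"role": "user", "content": "Reply in JSON."}]))
```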
LLM_HQL_USE_MULTI_STEPS
Whether to use multiple steps to improve instruction adherence for trend and year-over-year/month-over-month questions. Set as appropriate; the multi-step approach can be noticeably slower.
VECTOR_SEARCH_FIELD_VALUE_NUM_LIMIT
The upper limit on the number of distinct values extracted from a dataset field for tokenized search; distinct values beyond this limit are not extracted. Set this value as appropriate.
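A minimal sketch, assuming values beyond the limit are simply truncated:

```python
VECTOR_SEARCH_FIELD_VALUE_NUM_LIMIT = 1000

def values_to_extract(distinct_values):
    """Hypothetical sketch: extract at most the configured number of values."""
    return distinct_values[:VECTOR_SEARCH_FIELD_VALUE_NUM_LIMIT]

print(len(values_to_extract([str(i) for i in range(5000)])))  # -> 1000
```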
CHAT_BEGIN_WITH_SUGGEST_QUESTION
Whether to present the user with several suggested questions upon entering the analysis page. Enable this feature as needed.
CHAT_END_WITH_SUGGEST_QUESTION
Whether to present the user with several suggested questions after each round of Q&A is answered. Enable as needed; disabling this option saves some time.
Vector Database Configuration
ENABLE_VECTOR
Enables vector search. The AI Assistant uses the large model API to select the examples most relevant to the question; once vector search is enabled, it combines the results from the large model API with those from vector search.
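A hypothetical sketch of the combination step (llm_pick and vector_search are illustrative callbacks):

```python
ENABLE_VECTOR = True

def relevant_examples(question, llm_pick, vector_search):
    """Hypothetical sketch: merge LLM-selected and vector-search examples."""
    chosen = llm_pick(question)
    if ENABLE_VECTOR:
        # De-duplicate while preserving order.
        chosen = list(dict.fromkeys(chosen + vector_search(question)))
    return chosen

print(relevant_examples(
    "sales by region",
    llm_pick=lambda q: ["example A", "example B"],
    vector_search=lambda q: ["example B", "example C"],
))
```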
VECTOR_MODEL
The vectorization model, set based on whether vector search capability is required. It must be used together with VECTOR_ENDPOINT. The built-in vector service already includes the model intfloat/multilingual-e5-base, which does not need to be downloaded. If another model is needed, vector models can currently be selected from Hugging Face; note that the vector service must be able to reach the official Hugging Face site, or model downloads will fail.
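As an aside, intfloat/multilingual-e5-base can be exercised directly with the sentence-transformers library; this sketch only illustrates the model's usage convention and is not how the built-in vector service is wired:

```python
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer

# E5-family models expect "query: " / "passage: " prefixes on inputs.
model = SentenceTransformer("intfloat/multilingual-e5-base")

passages = [
    "passage: monthly sales by region",
    "passage: top customers by revenue",
]
query = "query: which region sold the most last month?"

doc_vecs = model.encode(passages, normalize_embeddings=True)
query_vec = model.encode(query, normalize_embeddings=True)

# Vectors are normalized, so a dot product gives cosine similarity.
print(doc_vecs @ query_vec)
```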
VECTOR_ENDPOINT
Vectorization API address, configured based on whether vector search capabilities are required. After installing the relevant vector database services, it defaults to the built-in vector service.
VECTOR_SEARCH_RELATIVE_FUNCTIONS
Whether to search for function descriptions related to the question. When enabled, function descriptions relevant to the question are retrieved, and the prompt grows accordingly. This switch only takes effect when ENABLE_VECTOR is enabled.
For detailed vector library configuration, see: AI Configuration
Console
In the console, we publicly display the workflow of HENGSHI ChatBI, where each node is an editable prompt. You can also interact directly with the AI assistant on this page, making troubleshooting more convenient.
Enhancement
Editing prompts requires a certain understanding of large language models. It is recommended that this operation be performed by a system administrator.
User System Prompt
Large language models possess most general knowledge, but for specific business domains, industry jargon, or proprietary knowledge, prompts are needed to enhance the model's understanding.
For example, in the e-commerce sector, terms like "major promotion" and "best-seller" may not have clear meanings to the model. By providing prompts, you can improve the model's comprehension of these terms.
Typically, a large language model would interpret "major promotion" as a large-scale promotional event conducted by merchants or platforms within a specific time frame. Such events are usually concentrated around shopping festivals, holidays, or themed days, such as "Double 11" or "618".
If you want the model to accurately understand the meaning of "major promotion," you can explicitly state in the prompt that it refers to events like Double 11, etc.
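For example, a glossary fragment like the following could be appended to the prompt (the definitions are illustrative only):

```python
# Illustrative glossary fragment to append to the user/system prompt;
# the definitions below are examples only.
PROMPT_GLOSSARY = """\
Glossary for this workspace:
- "major promotion": refers only to the Double 11 (Nov 11) and 618 (Jun 18)
  shopping festivals.
- "best-seller": a product ranked in the top 20 by units sold over the
  last 30 days.
"""
print(PROMPT_GLOSSARY)
```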
Conclusion Prompt
After the AI assistant retrieves data based on the user's question, it will prompt the large language model to summarize the query results according to this prompt in order to answer the question.
The system's default summary prompt is relatively basic. You can modify it to better fit your actual business scenarios and needs, making it more closely aligned with your company's operations.
SuggestQuestions Prompt
The system's default suggested question prompts are relatively basic. You can modify them according to your business needs to create question recommendation logic that is more closely aligned with real-world scenarios and specifically relevant to your company's business.