AI Assistant

HENGSHI SENSE provides an AI Assistant in the Go to Analysis feature, allowing users to obtain relevant data and charts through Q&A. Some configuration is required before use.

As shown in the figure below, the System Administrator needs to configure it on the Settings->Feature Configuration->AI Assistant page.

AI Assistant Feature Configuration

Model Provider

HENGSHI SENSE does not supply an AI assistant model; users need to apply to a model provider for one themselves.

Note

Users must apply for the API Key from the model provider themselves and keep it secure.

Other Vendors

The page also lists the other model vendors we support; however, we currently cannot guarantee the effectiveness of these models.

Caution

The effectiveness of large models depends on the model vendor, and HENGSHI SENSE cannot guarantee the effectiveness of every model. If you find the performance unsatisfactory, please contact support@hengshi.com or the model vendor promptly.

OpenAI-API-Compatible

If you need to use models other than those listed above, please select the OpenAI-API-Compatible option. Any model compatible with the OpenAI API format can be used.

Taking Doubao AI as an example, you can configure it as follows:

Doubao AI Configuration
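
For reference, the same configuration expressed in code. This is a minimal sketch using the OpenAI Python client; the endpoint URL and model name below are placeholders, not Doubao AI's actual values, so substitute the values issued by your model provider.

```python
from openai import OpenAI

# Placeholder values: use the endpoint, API Key, and model name
# issued by your model provider.
client = OpenAI(
    base_url="https://your-provider.example.com/api/v3",  # OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```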

Test Model Connection

After configuring the model's API Key, click the Test Model Connection button to check whether the model connection works. As shown in the figure below, if the connection succeeds, the content returned by the model API is displayed.

Test Model Connection
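
Conceptually, the test sends a trivial request to the configured endpoint and surfaces the response. Below is a minimal sketch of an equivalent check, assuming an OpenAI-compatible endpoint; it is illustrative, not HENGSHI SENSE's internal implementation.

```python
from openai import OpenAI

def test_model_connection(base_url: str, api_key: str, model: str) -> str:
    """Send a trivial request; return the reply on success, raise on failure."""
    client = OpenAI(base_url=base_url, api_key=api_key)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=16,
    )
    return resp.choices[0].message.content
```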

General Model Configuration

The following configuration items are system-level settings for the AI assistant and apply regardless of the model provider.

LLM_ANALYZE_RAW_DATA

Shown on the page as Allow Model to Analyze Raw Data. Sets whether the AI assistant analyzes the raw input data. If your data is sensitive, you can turn this off.

LLM_ANALYZE_RAW_DATA_LIMIT

Shown on the page as Allowed number of raw data rows for analysis. Limits the number of raw data rows used for analysis. Set it according to the model provider's processing capability, token limits, and your specific requirements.

LLM_ENABLE_SEED

Shown on the page as Use seed parameter. Controls whether a random seed is enabled when generating responses, which influences the variability of the results.

LLM_API_SEED

Shown on the page as the seed parameter. The random seed value used when generating responses. Used together with LLM_ENABLE_SEED; it can be set to any value by the user or left at the default.
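
Together, the two items correspond to optionally passing a seed with each request. A minimal sketch, assuming an OpenAI-compatible API (where seed is the standard parameter name); all values are illustrative:

```python
LLM_ENABLE_SEED = True   # illustrative: mirrors the page setting
LLM_API_SEED = 42        # illustrative seed value

params = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Total sales by region"}],
}
if LLM_ENABLE_SEED:
    # With a fixed seed, compatible models attempt to make output
    # reproducible across identical requests.
    params["seed"] = LLM_API_SEED

# resp = client.chat.completions.create(**params)
```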

LLM_SUGGEST_QUESTION_LOCALLY

Shown on the page as Do not use model to generate suggested questions. Specifies whether a large model is used when generating suggested questions.

  • true: generate suggested questions with local rules
  • false: generate suggested questions with the large model

LLM_SELECT_ALL_FIELDS_THRESHOLD

Shown on the page as Allow Model to Analyze Metadata (Threshold). Sets the field-count threshold at or below which all fields are selected. This parameter takes effect only when LLM_SELECT_FIELDS_SHORTCUT is true, and should be set accordingly.

LLM_SELECT_FIELDS_SHORTCUT

Sets whether, when a dataset has few fields, all fields are passed directly into HQL generation without a field-selection step, as sketched below. Used together with LLM_SELECT_ALL_FIELDS_THRESHOLD; configure it according to the actual scenario. Generally it does not need to be set to true. You can enable it if you are particularly sensitive to speed or want to skip the field-selection step; however, not selecting fields may affect the accuracy of the final data query.
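
A simplified sketch of how the two items interact; the function names here are hypothetical, not the actual implementation:

```python
def choose_fields(all_fields: list[str], shortcut: bool, threshold: int) -> list[str]:
    """Decide which dataset fields participate in HQL generation."""
    if shortcut and len(all_fields) <= threshold:
        # Few enough fields: skip the field-selection step and use them all.
        return all_fields
    # Otherwise run the normal model-driven field selection.
    return select_fields_with_model(all_fields)

def select_fields_with_model(fields: list[str]) -> list[str]:
    # Hypothetical stand-in for the model-driven selection step.
    return fields[:10]
```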

LLM_API_SLEEP_INTERVAL

Shown on the page as API Call Interval (seconds). Sets the sleep interval between API requests, in seconds. Configure it according to request-frequency requirements; consider setting it for large model APIs that enforce rate limits.

HISTORY_LIMIT

Shown on the page as the number of continuous conversation context entries. Determines how many historical conversation entries are carried when interacting with the large model.
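
In effect, this trims the conversation history attached to each request. A minimal sketch, assuming one entry means one user/assistant message pair:

```python
HISTORY_LIMIT = 5  # illustrative value

def build_messages(history: list[dict], question: str) -> list[dict]:
    """Attach only the most recent HISTORY_LIMIT entries to the new question."""
    recent = history[-HISTORY_LIMIT * 2:]  # each entry is a user/assistant pair
    return recent + [{"role": "user", "content": question}]
```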

LLM_ENABLE_DRIVER

Shown on the page as Use Model Inference Intent. Determines whether the AI is used to identify disallowed questions and to refine questions based on context. The context range is determined by HISTORY_LIMIT. It generally needs to be enabled only when context references or disallowed-question handling are required.

MAX_ITERATIONS

Shown on the page as Maximum Iterations for Model Inference. The maximum number of iterations, used to bound how many times the loop retries when the large model fails.
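
Conceptually, this bounds the retry loop around a failing model call. A simplified sketch; the validation hook is illustrative:

```python
MAX_ITERATIONS = 3  # illustrative value

def call_with_retries(call, is_valid):
    """Retry the model call until the result validates or the cap is reached."""
    for _ in range(MAX_ITERATIONS):
        result = call()
        if is_valid(result):
            return result
    raise RuntimeError(f"model call still failing after {MAX_ITERATIONS} iterations")
```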

LLM_API_REQUIRE_JSON_RESP

Determines whether the API response format must be JSON. This configuration item is supported only by OpenAI and generally does not need to be changed.
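
On the OpenAI API, this maps to the response_format request parameter. A minimal sketch; the model name is a placeholder:

```python
LLM_API_REQUIRE_JSON_RESP = True  # illustrative: mirrors the setting

params = {
    "model": "your-model-name",
    "messages": [{"role": "user", "content": "Describe the result as a JSON object."}],
}
if LLM_API_REQUIRE_JSON_RESP:
    # Forces the model to return a syntactically valid JSON object (OpenAI only).
    params["response_format"] = {"type": "json_object"}
```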

LLM_HQL_USE_MULTI_STEPS

Whether to use multiple steps to improve instruction adherence for trend, year-over-year, and month-over-month questions. Set as appropriate; multi-step processing may be somewhat slower.

VECTOR_SEARCH_FIELD_VALUE_NUM_LIMIT

The upper limit on the number of distinct values for tokenized-search dataset fields. Portions with too many matched distinct values will not be extracted. Set as appropriate.

CHAT_BEGIN_WITH_SUGGEST_QUESTION

Whether to present several suggested questions to the user upon entering Go to Analysis. Enable as needed.

CHAT_END_WITH_SUGGEST_QUESTION

Whether to present several suggested questions to the user after each round of Q&A. Enable as needed; disabling it can save some time.

Vector Library Configuration

ENABLE_VECTOR

Enables the vector search feature. The AI assistant uses the large model API to select the examples most relevant to the question; once vector search is enabled, the AI assistant combines the results from the large model API and the vector search.

VECTOR_MODEL

The vectorization model; set it according to whether vector search capability is needed. It must be used together with VECTOR_ENDPOINT. The system's built-in vector service ships with the model intfloat/multilingual-e5-base, which does not need to be downloaded. If another model is required, vector models from Hugging Face are currently supported; note that the vector service must be able to reach the Hugging Face website, otherwise the model download will fail.

VECTOR_ENDPOINT

The vectorization API address; set it according to whether vector search capability is needed. After the relevant vector database services are installed, it defaults to the built-in vector service.
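
For illustration, a sketch of how a question might be vectorized against such an endpoint. This assumes a Hugging Face text-embeddings-inference style /embed API; the built-in service's actual request schema may differ, and the address below is a placeholder.

```python
import requests

VECTOR_ENDPOINT = "http://localhost:8080"  # placeholder address

def embed(texts: list[str]) -> list[list[float]]:
    """Vectorize texts via the configured endpoint (TEI-style /embed assumed)."""
    resp = requests.post(f"{VECTOR_ENDPOINT}/embed", json={"inputs": texts})
    resp.raise_for_status()
    return resp.json()  # one embedding vector per input text

# vectors = embed(["total sales by region"])
```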

VECTOR_SEARCH_RELATIVE_FUNCTIONS

Whether to search for function descriptions related to the question. When enabled, function descriptions related to the question are retrieved, which correspondingly enlarges the prompt. This switch takes effect only when ENABLE_VECTOR is enabled.

For detailed vector library configuration, see: AI Configuration
