getting error after first chat in LMStudio - "Cannot apply filter "length" to type: UndefinedValue" jinja template issue

#12
by mayankiit04 - opened

This is the error I get when asking a follow-up question in a chat while running Q4_1, or any other quantized version of the model, on my local setup:

"Error rendering prompt with jinja template: "Cannot apply filter "length" to type: UndefinedValue".

This is usually an issue with the model's prompt template. If you are using a popular model, you can try to search the model under lmstudio-community, which will have fixed prompt templates. If you cannot find one, you are welcome to post this issue to our discord or issue tracker on GitHub. Alternatively, if you know how to write jinja templates, you can override the prompt template in My Models > model settings > Prompt Template."

The error goes away once all the chats are cleared.


Hello, after I described the issue to ChatGPT, it explained the cause and provided a fixed chat template (included at the end) that seems to work for me:

That error in LM Studio:

Cannot apply filter "length" to type: UndefinedValue

means some variable (like tools, messages, or add_generation_prompt) is undefined, and the Jinja template tries to do something like tools|length or messages|selectattr(...), which fails because the variable doesn’t exist at render time.

🧠 Why it happens

LM Studio doesn’t always pass every variable that Hugging Face templates expect — especially tools, messages, or add_generation_prompt.
When the model template does something like:

{%- if tools is not none and tools|length > 0 -%}


but tools is undefined, Jinja can’t compute tools|length, and you get exactly this error.

✅ The simplest fix

You can make the template safe by defining defaults for possibly missing variables at the top.

🔧 Patch to apply (top of your template)

Add this block right after the first line:

{# --- FIX FOR LM STUDIO UNDEFINED VARIABLES --- #}
{%- set messages = messages or [] -%}
{%- set tools = tools or [] -%}
{%- set add_generation_prompt = add_generation_prompt or false -%}
{%- set add_thoughts = add_thoughts or false -%}
{%- set bos_token = bos_token or '' -%}
{%- set eos_token = eos_token or '' -%}
{# ------------------------------------------------ #}


This ensures all of these variables exist with sensible defaults, so the rest of the template can run safely even if LM Studio didn't provide some of them.
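As a quick sanity check (again with Python's jinja2 standing in for LM Studio's engine), the same defaulting trick also protects the `selectattr`/`length` chain that the full template uses to detect a system message:

```python
from jinja2 import Environment

env = Environment()

# Defaults first, then the same selectattr/length chain the chat
# template uses to check whether a system message is present.
probe = env.from_string(
    "{%- set messages = messages or [] -%}"
    "{%- if messages|selectattr('role', 'equalto', 'system')|list|length > 0 -%}"
    "system message present"
    "{%- else -%}"
    "no system message"
    "{%- endif -%}"
)

print(probe.render())  # no variables passed at all -> "no system message"
print(probe.render(messages=[{"role": "system", "content": "hi"}]))
# -> "system message present"
```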

The full chat template suggested by ChatGPT:

{# ------------------------------------------------------------ #}
{#  FIX FOR LM STUDIO UNDEFINED VARIABLES                      #}
{# ------------------------------------------------------------ #}
{%- set messages = messages or [] -%}
{%- set tools = tools or [] -%}
{%- set add_generation_prompt = add_generation_prompt or false -%}
{%- set add_thoughts = add_thoughts or false -%}
{%- set bos_token = bos_token or '' -%}
{%- set eos_token = eos_token or '' -%}
{%- set tool_output_format = tool_output_format or "default" -%}
{%- set available_tools_string, thought_instructions, add_tool_id = '', '', true -%}
{# ------------------------------------------------------------ #}

{%- if tools is defined and tools and tools|length > 0 -%}
    {%- set available_tools_string -%}
You are provided with function signatures within <available_tools></available_tools> XML tags. 
You may call one or more functions to assist with the user query. 
Don't make assumptions about the arguments. 
You should infer the argument values from previous user responses and the system message. 
Here are the available tools:
<available_tools>
{% for tool in tools %}
{{ tool|string }}
{% endfor %}
</available_tools>
    {%- endset -%}
{%- endif -%}

{%- if tool_output_format is none or tool_output_format == "default" -%}
    {%- set tool_output_instructions -%}
Return all function calls as a list of json objects within <tool_calls></tool_calls> XML tags. 
Each json object should contain a function name and arguments as follows:
<tool_calls>[{"name": <function-name-1>, "arguments": <args-dict-1>}, {"name": <function-name-2>, "arguments": <args-dict-2>}, ...]</tool_calls>
    {%- endset -%}
{%- elif tool_output_format == "yaml" -%}
    {%- set tool_output_instructions -%}
Return all function calls as a list of yaml objects within <tool_calls></tool_calls> XML tags. 
Each yaml object should contain a function name and arguments as follows:
<tool_calls>
- name: <function-name-1>
  arguments: <args-dict-1>
- name: <function-name-2>
  arguments: <args-dict-2>
...
</tool_calls>
    {%- endset -%}
{%- endif -%}

{%- if add_thoughts -%}
    {%- set thought_instructions -%}
Prior to generating the function calls, you should generate the reasoning for why you're calling the function. 
Please generate these reasoning thoughts between <thinking> and </thinking> XML tags.
    {%- endset -%}
{%- endif -%}

{{- bos_token -}}

{%- set reasoning_prompt = 'You are a thoughtful and systematic AI assistant built by ServiceNow Language Models (SLAM) lab. Before providing an answer, analyze the problem carefully and present your reasoning step by step. After explaining your thought process, provide the final solution in the following format: [BEGIN FINAL RESPONSE] ... [END FINAL RESPONSE].' -%}

{%- if not (messages|selectattr('role','equalto','system')|list|length > 0) -%}
    {%- if tools and tools|length > 0 -%}
        {{- '<|system|>\n' + reasoning_prompt + available_tools_string + "\n" + tool_output_instructions + '\n<|end|>\n' -}}
    {%- else -%}
        {{- '<|system|>\n' + reasoning_prompt + '\n<|end|>\n' -}}
    {%- endif -%}
{%- endif -%}

{%- for message in messages -%}
    {%- if message['role'] == 'user' -%}
        {{- '<|user|>\n' -}}
        {%- if message['content'] is not string %}
            {%- for chunk in message['content'] %}
                {%- if chunk['type'] == 'text' %}
                    {{- chunk['text'] }}
                {%- elif chunk['type'] in ['image', 'image_url'] %}
                    [IMG]
                {%- endif -%}
            {%- endfor -%}
        {%- else %}
            {{- message['content'] }}
        {%- endif %}
        {{- '\n<|end|>\n' -}}
    {%- elif message['role'] == 'content' -%}
        {{- '<|content|>\n' + (message['content'][0]['text'] if message['content'] is not string else message['content']) + '\n<|end|>\n' -}}
    {%- elif message['role'] == 'system' -%}
        {%- set system_message = '' -%}
        {%- if message['content'] %}
            {%- if message['content'] is string %}
                {%- set system_message = message['content'] %}
            {%- else %}
                {%- set system_message = message['content'][0]['text'] %}
            {%- endif %}
        {%- endif -%}
        {%- if tools and tools|length > 0 -%}
            {{- '<|system|>\n' + reasoning_prompt + system_message + '\n' + available_tools_string + '\n<|end|>\n' -}}
        {%- else -%}
            {{- '<|system|>\n' + reasoning_prompt + system_message + '\n<|end|>\n' -}}
        {%- endif -%}
    {%- elif message['role'] == 'assistant' -%}
        {%- if loop.last -%}{%- set add_tool_id = false -%}{%- endif -%}
        {{- '<|assistant|>\n' -}}
        {%- if message['content'] %}
            {%- if message['content'] is not string and message['content'][0]['text'] is not none -%}
                {{- message['content'][0]['text'] -}}
            {%- else -%}
                {{- message['content'] -}}
            {%- endif -%}
        {%- elif message['chosen'] %}
            {{- message['chosen'][0] -}}
        {%- endif -%}
        {%- if add_thoughts and 'thought' in message and message['thought'] -%}
            {{- '<thinking>' + message['thought'] + '</thinking>' -}}
        {%- endif -%}
        {%- if message['tool_calls'] %}
            {{- '\n<tool_calls>[' -}}
            {%- for tool_call in message['tool_calls'] -%}
                {{- '{"name": "' + tool_call['function']['name'] + '", "arguments": ' + tool_call['function']['arguments']|string -}}
                {%- if add_tool_id -%}
                    {{- ', "id": "' + tool_call['id'] + '"' -}}
                {%- endif -%}
                {{- '}' -}}
                {%- if not loop.last -%}{{- ', ' -}}{%- endif -%}
            {%- endfor -%}
            {{- ']</tool_calls>' -}}
        {%- endif -%}
        {{- '\n<|end|>\n' + eos_token -}}
    {%- elif message['role'] == 'tool' -%}
        {%- set tool_message = message['content'] if message['content'] is string else message['content'][0]['text'] -%}
        {{- '<|tool_result|>\n' + tool_message|string + '\n<|end|>\n' -}}
    {%- endif -%}

    {%- if loop.last and add_generation_prompt and message['role'] != 'assistant' -%}
        {{- '<|assistant|>\n' -}}
    {%- endif -%}
{%- endfor -%}
