muzakkirhussain011 committed
Commit 45da28d · 1 Parent(s): d1dc22d

Add application files

Files changed (2):
  1. GRADIO_5_FIX.md +152 -0
  2. app.py +27 -14
GRADIO_5_FIX.md ADDED
# Gradio 5.x Message Format Fix

## Issue
```
gradio.exceptions.Error: "Data incompatible with messages format. Each message should be a dictionary with 'role' and 'content' keys or a ChatMessage object."
```

## Root Cause
Gradio 5.x changed the chatbot message format from tuples to dictionaries.

### Old Format (Gradio 4.x):
```python
chat_history.append((None, "Assistant message"))
chat_history.append(("User message", "Assistant response"))
```

### New Format (Gradio 5.x):
```python
chat_history.append({"role": "assistant", "content": "Assistant message"})
chat_history.append({"role": "user", "content": "User message"})
chat_history.append({"role": "assistant", "content": "Assistant response"})
```
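
An existing tuple-format history can be converted mechanically. Below is a minimal sketch (the `tuples_to_messages` helper is ours for illustration, not part of the fix): each `(user, assistant)` pair becomes up to two dictionaries, with `None` entries skipped.

```python
def tuples_to_messages(history):
    """Convert Gradio 4.x (user, assistant) tuples to 5.x message dicts."""
    messages = []
    for user_msg, assistant_msg in history:
        # A None slot means that side of the exchange is absent
        if user_msg is not None:
            messages.append({"role": "user", "content": user_msg})
        if assistant_msg is not None:
            messages.append({"role": "assistant", "content": assistant_msg})
    return messages
```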

## Fixed Files
- `app.py` - Updated all message additions to use dictionary format

## Changes Made

### 1. Initial Message
**Before:**
```python
chat_history.append((None, "Starting pipeline..."))
```

**After:**
```python
chat_history.append({
    "role": "assistant",
    "content": "Starting pipeline..."
})
```

### 2. Company Processing Messages
**Before:**
```python
chat_history.append((
    f"Process {company}",
    f"Processing: {company}"
))
```

**After:**
```python
chat_history.append({
    "role": "assistant",
    "content": f"Processing: {company}",
    "metadata": {"company": company}  # For tracking updates
})
```

### 3. Updating Streaming Messages
**Before:**
```python
if chat_history and chat_history[-1][0] == f"Process {company}":
    chat_history[-1] = (f"Process {company}", updated_content)
```

**After:**
```python
if chat_history and chat_history[-1].get("metadata", {}).get("company") == company:
    chat_history[-1]["content"] = updated_content
```
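
The append and update cases above can be folded into one helper. This is a sketch (the `upsert_company_message` name is ours, not from `app.py`), assuming a plain list-of-dict history:

```python
def upsert_company_message(chat_history, company, content):
    """Update the last message in place if it is tagged with `company`,
    otherwise append a new assistant message carrying that tag."""
    last = chat_history[-1] if chat_history else None
    if last is not None and last.get("metadata", {}).get("company") == company:
        last["content"] = content
    else:
        chat_history.append({
            "role": "assistant",
            "content": content,
            "metadata": {"company": company},
        })
    return chat_history
```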

### 4. Error/Status Messages
**Before:**
```python
chat_history.append((None, "Error occurred"))
```

**After:**
```python
chat_history.append({
    "role": "assistant",
    "content": "Error occurred"
})
```

## Message Roles

Gradio 5.x supports these roles:
- `"user"` - User messages
- `"assistant"` - Assistant/bot messages
- `"system"` - System messages (optional)

For this application, every message uses the `"assistant"` role, since the chatbot serves as a pipeline monitoring interface rather than a two-way conversation.
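
For reference, a history mixing all three roles would look like this (illustrative content only; this app never emits `user` or `system` messages):

```python
# A messages-format history using all three supported roles
# (hypothetical content, not taken from app.py):
chat_history = [
    {"role": "system", "content": "Pipeline monitor session started"},
    {"role": "user", "content": "Run the pipeline for Shopify"},
    {"role": "assistant", "content": "Starting pipeline..."},
]

roles = [message["role"] for message in chat_history]
```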

## Testing

After the fix, test with:
```bash
python app.py
# Enter a company name and run the pipeline
```

Expected: Messages should display properly without format errors.

## Chatbot Configuration

The chatbot component is configured as:
```python
gr.Chatbot(
    label="Agent Output & Generated Content",
    height=600,
    type="messages"  # This enables the new format
)
```

## Additional Notes

### Metadata Field
We added a `metadata` field to track which company each message belongs to:
```python
{
    "role": "assistant",
    "content": "...",
    "metadata": {"company": "Shopify"}
}
```

This allows us to update the correct message when streaming tokens for a specific company.

### Streaming Updates
For real-time token streaming, we update the message in place:
```python
# Find the message for this company
if chat_history[-1].get("metadata", {}).get("company") == company:
    # Update its content
    chat_history[-1]["content"] = new_content
```
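
This in-place update can be exercised without Gradio at all. The following is a minimal simulation (the `stream_tokens` helper and the token list are illustrative, not code from `app.py`): the buffer grows token by token and the tagged message's `content` is rewritten on each event.

```python
def stream_tokens(chat_history, company, tokens):
    """Simulate streaming: grow the tagged message's content token by token."""
    buffer = ""
    for token in tokens:
        buffer += token
        # Only touch the last message if it belongs to this company
        if chat_history and chat_history[-1].get("metadata", {}).get("company") == company:
            chat_history[-1]["content"] = buffer
    return chat_history

history = [{"role": "assistant", "content": "", "metadata": {"company": "Shopify"}}]
stream_tokens(history, "Shopify", ["Hello", " ", "world"])
```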

## Migration Checklist

If you have other Gradio interfaces, check for:
- [ ] All `chat_history.append()` calls use the dictionary format
- [ ] The Chatbot component has `type="messages"`
- [ ] Message updates modify the `"content"` key, not tuple indices
- [ ] All roles are valid: `"user"`, `"assistant"`, or `"system"`
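
Parts of this checklist can be automated. The validator below is an illustrative sketch (the `validate_messages` helper is ours, not part of the fix) that flags histories violating the messages format:

```python
VALID_ROLES = {"user", "assistant", "system"}

def validate_messages(chat_history):
    """Return a list of problems; an empty list means the history is valid."""
    problems = []
    for i, msg in enumerate(chat_history):
        if not isinstance(msg, dict):
            # Old tuple-format entries land here
            problems.append(f"message {i}: not a dict (got {type(msg).__name__})")
            continue
        if msg.get("role") not in VALID_ROLES:
            problems.append(f"message {i}: invalid role {msg.get('role')!r}")
        if "content" not in msg:
            problems.append(f"message {i}: missing 'content' key")
    return problems
```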

## Resources

- [Gradio 5.0 Migration Guide](https://www.gradio.app/guides/migration-guide-5)
- [Chatbot Component Docs](https://www.gradio.app/docs/chatbot)
app.py CHANGED
@@ -63,12 +63,15 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
     # Fallback to example companies
     company_names = ["Shopify", "Stripe"]
 
-    # Chat history for display
+    # Chat history for display (Gradio 5.x format)
     chat_history = []
     workflow_logs = []
 
     # Start pipeline message
-    chat_history.append((None, f"🚀 **Starting Dynamic CX Agent Pipeline...**\n\nDiscovering and processing {len(company_names)} companies:\n- " + "\n- ".join(company_names) + "\n\nUsing web search to find live data..."))
+    chat_history.append({
+        "role": "assistant",
+        "content": f"🚀 **Starting Dynamic CX Agent Pipeline...**\n\nDiscovering and processing {len(company_names)} companies:\n- " + "\n- ".join(company_names) + "\n\nUsing web search to find live data..."
+    })
     yield chat_history, "Initializing pipeline...", format_workflow_logs(workflow_logs)
 
     try:
@@ -133,10 +136,11 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
     })
 
     # Add company section to chat
-    chat_history.append((
-        f"Process {company}",
-        f"🏢 **{company}**\n\n*Industry:* {industry}\n*Size:* {size} employees\n\nGenerating personalized content..."
-    ))
+    chat_history.append({
+        "role": "assistant",
+        "content": f"🏢 **{company}**\n\n*Industry:* {industry}\n*Size:* {size} employees\n\nGenerating personalized content...",
+        "metadata": {"company": company}  # Track which company this is
+    })
     status = f"🏢 Processing {company}"
 
     elif event_type == "llm_token":
@@ -164,9 +168,9 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
     if email:
         content += f"**✉️ Email Draft:**\n{email}"
 
-    # Update last message
-    if chat_history and chat_history[-1][0] == f"Process {company}":
-        chat_history[-1] = (f"Process {company}", content)
+    # Update last message if it's for this company
+    if chat_history and chat_history[-1].get("metadata", {}).get("company") == company:
+        chat_history[-1]["content"] = content
 
     status = f"✍️ Writing content for {company}..."
 
@@ -185,8 +189,8 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
         content += str(email)
 
     # Update last message with final content
-    if chat_history and chat_history[-1][0] == f"Process {company}":
-        chat_history[-1] = (f"Process {company}", content)
+    if chat_history and chat_history[-1].get("metadata", {}).get("company") == company:
+        chat_history[-1]["content"] = content
 
     workflow_logs.append({
         "time": timestamp,
@@ -204,7 +208,10 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
         "action": "❌ Blocked",
         "details": reason
     })
-    chat_history.append((None, f"❌ **Compliance Block**: {reason}"))
+    chat_history.append({
+        "role": "assistant",
+        "content": f"❌ **Compliance Block**: {reason}"
+    })
     status = f"❌ Blocked: {reason}"
 
     elif event_type == "policy_pass":
@@ -231,12 +238,18 @@ async def run_pipeline_gradio(company_names_input: str) -> AsyncGenerator[tuple,
 
     All prospects have been enriched, scored, and prepared for outreach through the autonomous agent system.
     """
-    chat_history.append((None, final_msg))
+    chat_history.append({
+        "role": "assistant",
+        "content": final_msg
+    })
     yield chat_history, "✅ Pipeline Complete", format_workflow_logs(workflow_logs)
 
     except Exception as e:
         error_msg = f"❌ **Pipeline Error:** {str(e)}"
-        chat_history.append((None, error_msg))
+        chat_history.append({
+            "role": "assistant",
+            "content": error_msg
+        })
         yield chat_history, f"Error: {str(e)}", format_workflow_logs(workflow_logs)
 
     finally: