path: root/collaborativeagents/slurm/logs/run_expts_a100_14355851.err
blob: 59bbe1a4a95208e3967d3eb1d95b8551d8a0e947
2025-12-25 07:58:42,438 - INFO - Loaded dataset: math-500
2025-12-25 07:58:42,447 - INFO - Loaded 100 profiles from ../data/complex_profiles_v2/profiles_100.jsonl
2025-12-25 07:58:42,448 - INFO - Running method: rag_vector
2025-12-25 07:58:42,448 - INFO -   Profile 1/2
2025-12-25 07:58:47,959 - ERROR - Error in session: ConversationGenerator.__init__() got an unexpected keyword argument 'user_model'
2025-12-25 07:58:47,959 - ERROR - Error in session: ConversationGenerator.__init__() got an unexpected keyword argument 'user_model'
2025-12-25 07:58:47,960 - INFO -   Profile 2/2
2025-12-25 07:58:47,960 - ERROR - Error in session: ConversationGenerator.__init__() got an unexpected keyword argument 'user_model'
2025-12-25 07:58:47,960 - ERROR - Error in session: ConversationGenerator.__init__() got an unexpected keyword argument 'user_model'
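Every session fails the same way: the runner passes a user_model keyword that ConversationGenerator.__init__ does not accept, so all four sessions across the two profiles abort before any metrics are produced. The generator's source is not part of this log, so the following is a minimal sketch of the mismatch under assumed names (assistant_model and both model strings are placeholders, not values from this run):

# Minimal reproduction of the per-session TypeError above. The real
# ConversationGenerator signature is not shown in this log; 'assistant_model'
# and the model strings are assumptions for illustration only.
class ConversationGenerator:
    def __init__(self, assistant_model):  # no 'user_model' parameter
        self.assistant_model = assistant_model

try:
    # Presumed call-site shape: the caller passes a kwarg the class
    # dropped or never had, yielding the exact message logged per session.
    ConversationGenerator(assistant_model="model-a", user_model="model-b")
except TypeError as e:
    print(f"Error in session: {e}")

Depending on which side is stale, the fix is either to add user_model (back) to __init__ or to stop passing it from the session runner.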
2025-12-25 07:58:47,962 - WARNING - No values for metric task_success_rate, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric avg_user_tokens, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric avg_total_tokens, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric avg_enforcement_count, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric avg_preference_compliance, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric conflict_resolution_accuracy, skipping comparison
2025-12-25 07:58:47,962 - WARNING - No values for metric over_personalization_rate, skipping comparison
Traceback (most recent call last):
  File "/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/scripts/run_experiments.py", line 491, in <module>
    main()
  File "/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/scripts/run_experiments.py", line 479, in main
    analysis = runner.run_all()
               ^^^^^^^^^^^^^^^^
  File "/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/scripts/run_experiments.py", line 299, in run_all
    self._generate_report(analysis)
  File "/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/scripts/run_experiments.py", line 414, in _generate_report
    best = analysis["comparison"][metric_key]["best_method"]
           ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'task_success_rate'
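Because every session errored out, no metric values were collected (hence the seven WARNING lines), yet _generate_report still indexes analysis["comparison"] by each metric name and dies on the first one, task_success_rate. The source of run_experiments.py is not available here, so this is only a hedged sketch of a defensive variant (generate_report and METRIC_KEYS are hypothetical stand-ins; the metric names are copied from the WARNINGs above, and the analysis shape is inferred from the traceback):

# Sketch of a report loop that tolerates missing comparisons instead of
# raising KeyError. 'generate_report' mirrors the role of the script's
# _generate_report but is not its actual implementation.
METRIC_KEYS = [
    "task_success_rate", "avg_user_tokens", "avg_total_tokens",
    "avg_enforcement_count", "avg_preference_compliance",
    "conflict_resolution_accuracy", "over_personalization_rate",
]

def generate_report(analysis: dict) -> None:
    comparison = analysis.get("comparison", {})
    for metric_key in METRIC_KEYS:
        entry = comparison.get(metric_key)
        if entry is None:
            # Mirrors the WARNINGs: no values were collected for this metric.
            print(f"No comparison for {metric_key}; skipping")
            continue
        print(f"{metric_key}: best method = {entry['best_method']}")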