| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-27 09:57:37 -0600 |
|---|---|---|
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-27 09:57:37 -0600 |
| commit | dc801c07cf38b0c495686463e6ca6f871a64440e (patch) | |
| tree | 599f03114775921dbc472403c701f4a3a8ea188a /collaborativeagents/scripts/test_vllm_adapter.sh | |
| parent | e43b3f8aa36c198b95c1e46bea2eaf3893b13dc3 (diff) | |
Add collaborativeagents module and update gitignore
- Add collaborativeagents subproject with adapters, agents, and evaluation modules
- Update .gitignore to exclude large binary files (.whl, .tar), wandb logs, and results (a sketch of the entries follows below)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
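The .gitignore changes mentioned in the commit message are not shown here (the diffstat is limited to the test script). A plausible sketch of what those entries could look like, assuming wandb logs and results live in top-level `wandb/` and `results/` directories:

```gitignore
# Large binary artifacts
*.whl
*.tar

# Experiment tracking and result outputs
wandb/
results/
```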
Diffstat (limited to 'collaborativeagents/scripts/test_vllm_adapter.sh')
| -rwxr-xr-x | collaborativeagents/scripts/test_vllm_adapter.sh | 74 |
1 file changed, 74 insertions(+), 0 deletions(-)
diff --git a/collaborativeagents/scripts/test_vllm_adapter.sh b/collaborativeagents/scripts/test_vllm_adapter.sh
new file mode 100755
index 0000000..af22667
--- /dev/null
+++ b/collaborativeagents/scripts/test_vllm_adapter.sh
@@ -0,0 +1,74 @@
+#!/bin/bash
+# Test vLLM with 45% memory + ContextualAdapter loading
+
+set -e
+cd /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents
+source /u/yurenh2/miniforge3/etc/profile.d/conda.sh
+conda activate eval
+
+export HF_HOME=/projects/bfqt/users/yurenh2/hf_cache/huggingface
+export PYTHONPATH="${PWD}:${PWD}/scripts:${PWD}/../src:${PYTHONPATH}"
+
+MODEL_8B="/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/models/llama-3.1-8b-instruct"
+
+echo "=== Testing vLLM 45% memory + Adapter ==="
+echo "GPUs available:"
+nvidia-smi --query-gpu=index,name,memory.total --format=csv
+
+# Kill any existing vLLM
+pkill -f "vllm.entrypoints" 2>/dev/null || true
+sleep 2
+
+echo ""
+echo "Memory before vLLM:"
+nvidia-smi --query-gpu=index,memory.used --format=csv
+
+echo ""
+echo "Starting vLLM with 45% memory on GPU 0,1..."
+CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server \
+    --model $MODEL_8B --port 8004 --tensor-parallel-size 2 \
+    --gpu-memory-utilization 0.45 --max-model-len 8192 \
+    --disable-log-requests --dtype bfloat16 &
+
+VLLM_PID=$!
+echo "vLLM PID: $VLLM_PID"
+
+echo "Waiting for vLLM to start..."
+for i in $(seq 1 60); do
+    if curl -s http://localhost:8004/health > /dev/null 2>&1; then
+        echo "vLLM ready after $((i*2))s"
+        break
+    fi
+    sleep 2
+done
+
+echo ""
+echo "Memory after vLLM started:"
+nvidia-smi --query-gpu=index,memory.used --format=csv
+
+echo ""
+echo "Testing ContextualAdapter loading..."
+python -c "
+import sys
+sys.path.insert(0, 'collaborativeagents')
+sys.path.insert(0, 'src')
+
+from adapters.contextual_adapter import ContextualAdapter
+print('Creating ContextualAdapter...')
+adapter = ContextualAdapter()
+print('Initializing (loading model)...')
+adapter.initialize()
+print('Testing generation...')
+adapter.start_session('test')
+result = adapter.generate_response('What is 2+2?')
+print(f'Response: {result[\"response\"][:100]}')
+print('SUCCESS: ContextualAdapter works with vLLM running!')
+"
+
+echo ""
+echo "Final memory usage:"
+nvidia-smi --query-gpu=index,memory.used --format=csv
+
+# Cleanup
+pkill -f "vllm.entrypoints" 2>/dev/null || true
+echo "Test complete!"
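The script only polls the /health endpoint before running the adapter test. To manually confirm the server also serves generations, one can query the OpenAI-compatible completions route that `vllm.entrypoints.openai.api_server` exposes; a minimal sketch, assuming the same port and model path the script uses:

```bash
# Request a short completion from the vLLM server started by the script.
# With vLLM's OpenAI-compatible server, the "model" field must match the
# path passed via --model at startup.
curl -s http://localhost:8004/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/models/llama-3.1-8b-instruct",
        "prompt": "What is 2+2?",
        "max_tokens": 16
      }'
```

A successful response is a JSON body whose `choices[0].text` holds the generated text. The embedded Python block in the script then checks the other half of the setup: that `ContextualAdapter` can still load its model and generate while vLLM holds its 45% GPU memory reservation.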
