Hi, I've tried your GRPO Reasoning trainer with Qwen2.5 3B Instruct:
I followed your inference with/without LoRA, and the model did seem to show an 'Aha moment' after training for just 250 steps with the GRPO trainer. BUT, I noticed a difference between the base model and the reasoning model:
For the reasoning model we add this system prompt, while the base model has no such constraint.
So I tried ablating the system prompt to see whether it influences the model. The results suggest that the base model will also try to "reason" even without RL training, so I'm not sure whether the 'Aha moment' really appears in this experiment.
The base model:
without system_prompt (Wrong):
'There are no letters 'r' in the word "strawberry."'
with system_prompt (Correct):
'\nTo find out how many times the letter 'r' appears in the word "strawberry," we should go through the word character by character and count each occurrence of 'r'.\n\n- The word "strawberry" contains 'r' three times.\n- First occurrence: The 'r' in the third position.\n- Second occurrence: The 'r' in the seventh position.\n- Third occurrence: The 'r' in the ninth position.\n\n\n...\n\nThere are 3 'r's in the word "strawberry."\n'
The reasoning model:
without system_prompt (Wrong):
'There are two 'r's in the word "strawberry."'
with system_prompt (Correct):
'\nTo find out how many times the letter 'r' appears in the word "strawberry", I will go through the word character by character and count each occurrence of 'r'.\n\n1. s\n2. t\n3. r\n4. a\n5. w\n6. b\n7. a\n8. r\n9. r\n\nHere, the letter 'r' appears three times in the word "strawberry".\n\n\nThe letter 'r' appears 3 times in the word "strawberry".\n'
It seems that the system prompt, rather than the GRPO training, is the key to reasoning and to reaching the correct answer. But there's one thing I still need to try: I only trained with RL for 250 steps, so I'll train for 2500 steps to see whether a real 'Aha moment' emerges.
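For anyone who wants to reproduce the ablation, here is a minimal sketch of the four-way comparison (base vs. LoRA reasoning model, each with and without the system prompt). It assumes a standard transformers + peft setup; the adapter path and the system prompt text are placeholders, not the repo's actual values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "Qwen/Qwen2.5-3B-Instruct"
ADAPTER = "path/to/grpo-lora"   # placeholder: the LoRA checkpoint from GRPO training
SYSTEM_PROMPT = "..."           # placeholder: the reasoning system prompt used in training

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)

question = 'How many letters "r" are in the word "strawberry"?'

def generate(use_system_prompt: bool) -> str:
    # Build the chat with or without the system prompt, then greedy-decode.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] if use_system_prompt else []
    messages.append({"role": "user", "content": question})
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(input_ids, max_new_tokens=512, do_sample=False)
    return tokenizer.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True)

for use_sp in (False, True):
    tag = "with system_prompt" if use_sp else "without system_prompt"
    with model.disable_adapter():                    # base model (LoRA weights bypassed)
        print(f"base      | {tag}: {generate(use_sp)}")
    print(f"reasoning | {tag}: {generate(use_sp)}")  # reasoning model (LoRA active)
```

If the base model already reasons correctly once the system prompt is added, that would support the point above that the prompt, not the 250-step GRPO run, is doing most of the work.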