Those are not DeepSeek R1. They are unrelated models, like Llama 3 from Meta or Qwen from Alibaba, "distilled" by DeepSeek.
Distillation is a common way to make a smaller model smarter by training it to imitate a larger one's outputs.
Ollama should never have labelled them deepseek:8B/32B. Far too many people misunderstood that.
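The core idea of distillation can be sketched in a few lines: the student model is trained to match the teacher's softened output distribution rather than hard labels. This is a toy pure-Python illustration of that loss, not DeepSeek's actual training pipeline; the logits and temperature here are made-up example values.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 "softens" the distribution, exposing the
    # teacher's relative preferences among wrong answers too.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) over temperature-scaled softmax outputs:
    # the quantity the student minimizes to mimic the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]   # hypothetical teacher logits
student = [1.5, 1.2, 0.3]   # hypothetical student logits
print(kd_loss(student, teacher) >= 0)   # KL divergence is never negative
```

The loss is zero only when the student reproduces the teacher's distribution exactly, which is why training on it pulls the small model toward the big one's behaviour.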
Maybe try Anki? It is specifically designed to help you memorize things long term through spaced repetition.
You will need to create your own cards or find someone else's deck. After you learn a card, it automatically schedules a review at the best time to keep you from forgetting it.
It really helped me learn vocabulary. (Enable FSRS from the beginning.)
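The scheduling idea behind spaced repetition is simple: each time you recall a card correctly, the gap until the next review grows multiplicatively; a lapse resets it. This is a toy SM-2-style sketch, not Anki's or FSRS's actual algorithm, and the interval and ease values are made-up examples.

```python
from datetime import date, timedelta

def next_interval(prev_interval_days, ease, passed):
    # Toy spaced-repetition step: a correct answer stretches the
    # interval by the ease factor; a lapse resets to one day.
    if not passed or prev_interval_days == 0:
        return 1
    return round(prev_interval_days * ease)

# Hypothetical card: last interval 10 days, ease factor 2.5.
interval = next_interval(10, 2.5, passed=True)
print(interval)                                  # 25
print(date.today() + timedelta(days=interval))   # next review date
```

Because intervals grow geometrically, well-known cards quickly fade to rare reviews while shaky ones keep coming back, which is what makes the method efficient for vocabulary.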