StochasTok is a novel stochastic tokenization method that enhances large language models' (LLMs) understanding of subword structures by randomly splitting tokens during training. This approach significantly improves performance on various subword-level tasks, such as character counting and substring identification, without the high computational costs associated with previous methods. Additionally, StochasTok can be easily integrated into existing pretrained models, yielding considerable improvements with minimal changes.
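
To make the core idea concrete, below is a minimal sketch of stochastic token splitting under stated assumptions: it assumes a vocabulary mapping token strings to ids, and with some probability replaces a token with a pair of vocabulary tokens whose strings concatenate back to the original, so the decoded text is unchanged while the tokenization varies. The names `stochastok_expand` and `split_prob` are illustrative and not taken from the paper's codebase.

```python
import random

def stochastok_expand(token_ids, vocab, inv_vocab, split_prob=0.1, rng=None):
    """Randomly split some tokens into equivalent sub-token pairs.

    For each token, with probability `split_prob`, pick a random split
    point such that both halves of the token's string exist in the
    vocabulary, and emit that pair instead. Decoding the result yields
    the same text; only the token boundaries differ.
    """
    rng = rng or random.Random()
    out = []
    for tid in token_ids:
        text = inv_vocab[tid]  # token id -> token string
        if rng.random() < split_prob and len(text) > 1:
            # Collect split points where both halves are valid vocabulary tokens.
            candidates = [
                (vocab[text[:i]], vocab[text[i:]])
                for i in range(1, len(text))
                if text[:i] in vocab and text[i:] in vocab
            ]
            if candidates:
                out.extend(rng.choice(candidates))
                continue
        out.append(tid)
    return out
```

Applied to training batches, this kind of perturbation exposes the model to many alternative segmentations of the same text, which is what lets it learn subword structure without changing the model architecture or the underlying tokenizer.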