A minimal tensor processing unit (TPU), inspired by Google's TPU v1 and v2, has been developed around a 2D grid of processing elements for efficient computation. It supports multiply-accumulate (MAC) operations and activation functions, and it comes with step-by-step instructions for integrating and testing the modules within the development environment. The project aims to make chip accelerator design accessible to people with varying levels of expertise.
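
To make the dataflow concrete, here is a minimal software sketch of the idea described above: a 2D grid of accumulators builds up a matrix product one MAC step at a time, and an element-wise activation (ReLU is used here purely for illustration) is applied to the result. The function and variable names are hypothetical and do not correspond to the project's actual modules.

```python
def mac_grid_matmul(a, b):
    """Compute a @ b with an explicit grid of per-cell accumulators.

    a: list of rows (M x K), b: list of rows (K x N).
    Cell (i, j) accumulates sum_k a[i][k] * b[k][j], one MAC per step,
    mimicking how a 2D grid of MAC units builds the result over K cycles.
    """
    m, k = len(a), len(a[0])
    n = len(b[0])
    acc = [[0 for _ in range(n)] for _ in range(m)]  # per-cell accumulators
    for step in range(k):                            # one "cycle" per k index
        for i in range(m):
            for j in range(n):
                acc[i][j] += a[i][step] * b[step][j]  # single MAC per cell
    return acc


def relu(grid):
    """Element-wise activation applied to the accumulated outputs."""
    return [[max(0, x) for x in row] for row in grid]


if __name__ == "__main__":
    a = [[1, -2], [3, 4]]
    b = [[5, 6], [7, 8]]
    print(relu(mac_grid_matmul(a, b)))  # [[0, 0], [43, 50]]
```

This is only a behavioral model of the grid-of-MACs pattern; the actual hardware modules, their interfaces, and the supported activation functions are defined in the project sources.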