This paper presents the development of a compact and effective language model inspired by the LLaMA architecture. Our focus was on constructing the model around the fundamental principles of LLaMA, which guided our architectural decisions and training methods. We sought to explore innovative research directions and to expand what is achievable with limited resources. By leveraging open-source datasets and careful training techniques, we made notable progress without relying on extensive computational power or proprietary data. Nevertheless, due to resource limitations, the model remains a work in progress; researchers with access to greater computational capacity could build on this foundation to further improve its performance. We hope this paper encourages others in the field to contribute to the development of more capable language models that are accessible to all. The key training parameters are the context window size, the number of layers, the batch size, and the model dimensions. Results are evaluated in terms of epoch count, execution time, model parameter count, and validation loss.
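To make these knobs concrete, the sketch below shows one way such a training configuration might be expressed in code. The field names and values (e.g. `context_window`, `n_layers`, `d_model`, and the epoch count) are illustrative assumptions only, not the settings actually used in this work.

```python
from dataclasses import dataclass


@dataclass
class TrainConfig:
    # Key training parameters discussed in the paper; all values here are
    # illustrative placeholders, not the settings used for the reported model.
    context_window: int = 256   # tokens the model attends to per sequence
    n_layers: int = 8           # number of transformer decoder layers
    batch_size: int = 32        # sequences per optimization step
    d_model: int = 512          # embedding / hidden dimension
    epochs: int = 10            # epoch count, reported alongside results


if __name__ == "__main__":
    config = TrainConfig()
    print(f"Example configuration: {config}")
```

Grouping the hyperparameters in a single configuration object like this keeps the values that are swept during experiments (and reported with the results) in one place.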