Person re-identification requires recognising a person's appearance, recorded by one or more cameras, across a large gallery of candidate targets: the task is to match images of the same pedestrian captured by different cameras. The problem is further complicated by low image resolution, variations in lighting, and the changing appearance of carried items, such as a bag, when seen from different viewpoints. The main difficulties arise because images of the same individual are taken at different times and with different cameras. The AlignedReID approach alleviates some of these problems: it learns a global feature jointly with local ones. Because the intra-class distance in person re-identification should be smaller than the inter-class distance, we combine several losses to constrain the model. We evaluate our method on three common benchmark datasets, Market-1501, CUHK03 (detected and labelled), and CUHK-SYSU, and find that it outperforms existing methods.
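The local branch of AlignedReID matches horizontal stripe features of two images by a shortest-path alignment, so that stripes are compared in order but small vertical misalignments are tolerated. The following is a minimal NumPy sketch of that local distance, not the authors' exact implementation; the function name, the (0, 1) squashing of element-wise distances, and the stripe shapes are assumptions for illustration.

```python
import numpy as np

def local_distance(f, g):
    """Aligned local distance between two sequences of stripe features.

    f, g: arrays of shape (m, d) and (n, d), one row per horizontal stripe
    (top to bottom). Pairwise stripe distances are squashed to (0, 1), and
    the total distance is the cost of the shortest path from (0, 0) to
    (m-1, n-1) in that matrix, moving only right or down, which aligns
    stripes without reordering them.
    """
    # Pairwise Euclidean distances between stripes, squashed to (0, 1).
    diff = f[:, None, :] - g[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    d = (np.exp(d) - 1.0) / (np.exp(d) + 1.0)

    # Dynamic programming over the m x n distance matrix.
    m, n = d.shape
    cost = np.zeros((m, n))
    cost[0, 0] = d[0, 0]
    for i in range(1, m):
        cost[i, 0] = cost[i - 1, 0] + d[i, 0]
    for j in range(1, n):
        cost[0, j] = cost[0, j - 1] + d[0, j]
    for i in range(1, m):
        for j in range(1, n):
            cost[i, j] = min(cost[i - 1, j], cost[i, j - 1]) + d[i, j]
    return cost[-1, -1]
```

In training, a distance of this kind can be combined with the global-feature distance inside a triplet-style loss, so that the aligned distance between images of the same identity is pushed below the distance between images of different identities.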