Sabaragamuwa University of Sri Lanka

Deep Learning Approach for Painting Authentication: Differentiating AI-Generated and Human-Drawn Paintings

dc.contributor.author Warnakulasooriya, A.I.
dc.contributor.author Rupasingha, R.A.H.M.
dc.contributor.author Kumara, B.T.G.S.
dc.date.accessioned 2025-12-12T09:11:00Z
dc.date.available 2025-12-12T09:11:00Z
dc.date.issued 2025-02-19
dc.identifier.citation Abstracts of the ComURS2025 Computing Undergraduate Research Symposium 2025, Faculty of Computing, Sabaragamuwa University of Sri Lanka. en_US
dc.identifier.isbn 978-624-5727-57-5
dc.identifier.uri http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4956
dc.description.abstract The arrival of generative artificial intelligence and diffusion models has revolutionised artwork creation by enabling the generation of hyper-realistic paintings from a textual description. Consequently, it is crucial for museums, art galleries, and art auctions to distinguish synthetic paintings from human-drawn paintings (HDPs) in order to protect the social, cultural, and monetary value of art and of artists. To address the lack of methods for specifically distinguishing AI-generated paintings (AIGPs) from HDPs, this study proposes a deep-learning-based approach utilising a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN). A diverse and balanced dataset of 3,000 paintings across 10 different art styles was drawn from the AI-ArtBench dataset, with the AIGPs generated in equal proportions by Latent Diffusion and Standard Diffusion models. The ANN and CNN models were built and methodically fine-tuned through hyperparameter tuning. The implemented CNN model achieved 89% classification accuracy with a training split of 30%, while the ANN model reached its optimum accuracy of 76% at the same training split. The CNN's superiority over the ANN was especially evident in detecting discrete artistic patterns, making the CNN more suitable for complex painting classification tasks. Furthermore, evaluation metrics such as precision, recall, and F1-score were analysed and compared for both models. Moreover, the key visual features the model relied on when classifying paintings were identified by examining heatmaps produced with Gradient-weighted Class Activation Mapping (Grad-CAM). Although the study encountered limitations in processing high-resolution images within the available runtime environments of the Google Colab virtual machine, the proposed approach contributes to advances in computer vision and art authentication, enhancing automated art analysis capabilities. In future work, the models' performance across different art styles and the key features identified for each art theme will be analysed. en_US
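
As an illustration of the kind of pipeline the abstract describes, the sketch below builds a small binary CNN classifier (AIGP vs. HDP) and a Grad-CAM heatmap function in Python with TensorFlow/Keras. The layer sizes, image resolution, the "paintings/" directory layout with one subfolder per class, and the 30%/70% train/validation split are illustrative assumptions, not the authors' exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (256, 256)  # assumed input resolution

def build_cnn(num_classes: int = 2) -> tf.keras.Model:
    # Small convolutional stack; filter counts and dropout rate are placeholders.
    model = models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", name="last_conv"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def grad_cam(model, image, conv_layer="last_conv"):
    # Grad-CAM: weight the last convolutional feature maps by the gradient
    # of the predicted class score, then apply ReLU and normalise.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        top_class = int(tf.argmax(preds[0]))
        class_score = preds[:, top_class]
    grads = tape.gradient(class_score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))          # pool gradients per channel
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)   # weighted sum of feature maps
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)   # normalise to [0, 1]
    return cam.numpy()

# Hypothetical 30%/70% train/validation split over the assumed image folder.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "paintings/", validation_split=0.7, subset="training",
    seed=42, image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "paintings/", validation_split=0.7, subset="validation",
    seed=42, image_size=IMG_SIZE, batch_size=32)

model = build_cnn()
model.fit(train_ds, validation_data=val_ds, epochs=10)

The fit call is a hypothetical usage; the heatmap returned by grad_cam can then be upsampled and overlaid on the painting to inspect which regions drive the AIGP/HDP decision, in the spirit of the Grad-CAM analysis reported in the abstract.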
dc.language.iso en en_US
dc.publisher Faculty of Computing, Sabaragamuwa University of Sri Lanka en_US
dc.subject AI-Generated Paintings en_US
dc.subject ANN en_US
dc.subject Classification en_US
dc.subject CNN en_US
dc.subject Human-Drawn Paintings en_US
dc.title Deep Learning Approach for Painting Authentication: Differentiating AI-Generated and Human-Drawn Paintings en_US
dc.type Article en_US

