<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>Abstracts of the graduate colloquium 2025</title>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4862</link>
<description/>
<pubDate>Tue, 28 Apr 2026 07:52:18 GMT</pubDate>
<dc:date>2026-04-28T07:52:18Z</dc:date>
<image>
<title>Abstracts of the graduate colloquium 2025</title>
<url>http://repo:8080/xmlui/bitstream/id/31730475-a8ac-4893-8967-24d40f58055e/</url>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4862</link>
</image>
<item>
<title>Title page</title>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4873</link>
<description>Title page
</description>
<pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4873</guid>
<dc:date>2025-02-19T00:00:00Z</dc:date>
</item>
<item>
<title>Front materials</title>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4872</link>
<description>Front materials
</description>
<pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4872</guid>
<dc:date>2025-02-19T00:00:00Z</dc:date>
</item>
<item>
<title>A Comparative Analysis of Deep Learning Algorithms for Formality Classification in Texts Using Linguistic Features</title>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4871</link>
<description>A Comparative Analysis of Deep Learning Algorithms for Formality Classification in Texts Using Linguistic Features
Karunarathna, K.M.G.S.; Rupasingha, R.A.H.M.; Kumara, B.T.G.S.
Because of the wide variety of formal and informal writing styles brought about by the rapid growth of digital communication, classifying documents by formality has become a challenging task. Using a variety of variables, this work seeks to increase the accuracy of formality classification algorithms. Grammar, vocabulary, punctuation, and sentence structure are some of the stylistic components that define different writing styles, and traditional approaches have trouble distinguishing between them. Differentiating between formal and informal language is becoming increasingly important in applications such as research papers, legal documents, informal letters, and news. This study uses linguistic features to examine how well deep learning algorithms classify documents as formal or informal. The study collected a dataset of 5,000 text samples: 2,500 formal documents (formal letters and news items), with the remainder being informal documents (personal blogs and personal letters). All data were then pre-processed using stop-word removal, lemmatization, tokenization, and lowercasing. Seven linguistic features spanning the formal and informal categories (pronouns, grammar, vocabulary, slang, acronyms, language, and initialisms) were targeted and extracted. These seven features were then combined to generate a feature vector for each document. To classify the documents, three deep learning models, Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks, were trained on the generated feature vectors; they were selected based on the literature review, as ANN learns nonlinear patterns in data, CNN identifies text sections, and LSTM considers word position. The performance of each model was compared using different test-splitting methods and cross-validation techniques. According to the experimental data, the LSTM model outperforms ANN and CNN in terms of precision, recall, and F-measure, achieving the highest classification accuracy of 89.4% with an epoch size of 100 and a batch size of 32, along with the lowest Mean Absolute Error and Root Mean Squared Error. The results highlight how well LSTM can detect linguistic subtleties and offer suggestions for improving formality recognition in Natural Language Processing applications, which will help with more context-sensitive text classification.
</description>
<pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4871</guid>
<dc:date>2025-02-19T00:00:00Z</dc:date>
</item>
<item>
<title>Towards Ethical Inference in Language Models: Integrating Religious Data and Enhancing Responsible LLM Development</title>
<link>http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4870</link>
<description>Towards Ethical Inference in Language Models: Integrating Religious Data and Enhancing Responsible LLM Development
Ranasinghe, K.S.; Paik, I.
Large Language Models (LLMs) have emerged as powerful tools for many aspects of daily life. These models are capable of diverse tasks, including text understanding and generation, image generation, language translation, and sentiment analysis. Continual advancements in LLMs are expanding the scope of their capabilities, enabling a wide range of applications. Although LLMs have made significant progress, there are still challenges and limitations that need to be addressed. As existing LLMs generally focus on natural language processing tasks, it is crucial to emphasize the training and fine-tuning of ethical LLMs. When developing and fine-tuning LLMs, issues such as biased responses and a lack of moral consistency can arise. This can lead to significant ethical challenges, particularly because the data used for training heavily influences the model’s outputs. Developing a dedicated ethical LLM by establishing a benchmark for ethical performance could help overcome this problem. The primary goal of this research is to implement an ethical-inference language model that can make predictions based on religious data. The Llama-2-7B-chat model was fine-tuned on religious data using Low-Rank Adaptation (LoRA) techniques. The fine-tuned model was tested by generating responses to prompts describing ethical scenarios so that its accuracy could be calculated. The model was trained on 5,000 data samples from the Bible. During training, the loss decreased gradually, indicating that the model learned well from the data. The fine-tuned model provides reliable performance when working with ethics-related data. Further, it demonstrated the ability to generate text based on ethical prompts, showing a positive trend in the generated ethical inferences and indicating that the model can be developed further by training with more religious data from the Bible, the Quran, Hindu scriptures, and the Tripitaka. In future work, the model will be refined further using supervised fine-tuning to obtain a more accurate model with enhanced ethical-inference capabilities.
</description>
<pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://repo.lib.sab.ac.lk:8080/xmlui/handle/susl/4870</guid>
<dc:date>2025-02-19T00:00:00Z</dc:date>
</item>
</channel>
</rss>
