International Journal of Communication Networks and Information Security, 2023
Graph theory, which began with Euler's solution to the Königsberg bridge problem, has since developed into a fundamental tool for modeling interconnected systems. Graphs represent diverse real-world phenomena, including social networks, web search algorithms, and even biological interactions. Traditional static graph algorithms have been studied extensively since the 1940s, yet they struggle to handle real-world scenarios where graph structures change continuously. This has motivated the development of dynamic graph algorithms that can efficiently update solutions as graphs evolve over time. Dynamic graph algorithms offer a solution by processing updates, such as insertions or deletions of edges and vertices, rather than recomputing over the entire graph. These algorithms are essential for applications such as search engines, real-time network analysis, and interactive systems where graph structures change incrementally. Dynamic algorithms can be incremental (handling only insertions), decremental (handling only deletions), or fully dynamic (handling both), with each type presenting distinct challenges and optimization opportunities. This study investigates dynamic graph algorithms, focusing on their efficiency, scalability, and applicability to large-scale graphs. By exploiting parallelism across multicore systems, GPUs, and distributed architectures, these algorithms aim to meet the demands of modern computational environments. The complexities of fully dynamic algorithms are also addressed, including the trade-offs between processing batched updates and maintaining real-time responsiveness. The study further covers the design of space- and time-efficient techniques and their application in real-world scenarios such as web optimization, network analysis, and real-time interactive systems.
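To make the incremental case described above concrete, the sketch below (not taken from the paper; the class and method names are illustrative) maintains connectivity under edge insertions with a union-find forest, so each update costs near-constant amortized time instead of a full recomputation. Fully dynamic connectivity, which must also survive deletions, requires substantially more machinery than this.

```python
# Minimal sketch of an *incremental* dynamic-graph structure (insertions only):
# a union-find (disjoint-set) forest answering connectivity queries as edges arrive.
# Names and structure are illustrative, not from the paper.

class IncrementalConnectivity:
    """Answers 'are u and v connected?' while edges are inserted over time."""

    def __init__(self, n: int):
        self.parent = list(range(n))   # parent[i] == i means i is a root
        self.rank = [0] * n            # union-by-rank keeps trees shallow

    def find(self, x: int) -> int:
        # Path halving: point nodes closer to the root as we walk up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def insert_edge(self, u: int, v: int) -> None:
        # An edge insertion can only merge two components, never split one.
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

    def connected(self, u: int, v: int) -> bool:
        return self.find(u) == self.find(v)


if __name__ == "__main__":
    g = IncrementalConnectivity(5)
    g.insert_edge(0, 1)
    g.insert_edge(3, 4)
    print(g.connected(0, 1))  # True
    print(g.connected(1, 4))  # False
    g.insert_edge(1, 3)
    print(g.connected(0, 4))  # True: the update is absorbed without recomputing the graph
```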
Automatic text summarization has recently become popular in natural language processing because of its ability to condense an overwhelming quantity of information into a concise summary. This research analyzes extractive and abstractive approaches to automatic text summarization using deep learning models. The work focuses on the performance of models such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based models including BERT and GPT. To evaluate these models, the study employs objective metrics such as ROUGE and BLEU, in addition to subjective human evaluation of coherence, relevance, and fluency. The results show that the Transformer-based models, BERT and GPT, outperform the extractive models in every aspect, producing summaries with high fluency and contextual relevance. However, challenges remain in improving higher-order n-gram recall and preserving summary relevance to the source text. The study also concludes that deep learning-based summarization methods show high potential but require further research to improve the quality of the output summaries. The work offers an understanding of the components of existing models and provides a foundation for future development in automatic summarization.
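As a hedged illustration of the evaluation pipeline the abstract describes (abstractive generation scored with ROUGE), the sketch below uses the Hugging Face `transformers` pipeline and the `rouge-score` package. The model checkpoint, texts, and generation parameters are placeholders, not the paper's actual setup.

```python
# Illustrative sketch (not the paper's code): produce an abstractive summary with a
# Transformer model and score it against a reference with ROUGE.
# Assumes the `transformers` and `rouge-score` packages are installed; the model
# name and example texts are placeholders.

from transformers import pipeline
from rouge_score import rouge_scorer

document = (
    "Automatic text summarization condenses long documents into short summaries. "
    "Extractive methods select existing sentences, while abstractive methods "
    "generate new sentences that paraphrase the source content."
)
reference = "Summarization shortens documents either by extracting sentences or by generating new ones."

# Abstractive summarization with a pretrained sequence-to-sequence checkpoint
# (a commonly used summarization model, used here only as an example).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
candidate = summarizer(document, max_length=40, min_length=10, do_sample=False)[0]["summary_text"]

# ROUGE-1 / ROUGE-2 / ROUGE-L: n-gram and longest-common-subsequence overlap
# between the generated candidate and the human-written reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, result in scores.items():
    print(f"{name}: precision={result.precision:.3f} recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```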