Artificial Intelligence and Applications https://ojs.bonviewpress.com/index.php/AIA <p>Artificial intelligence is a discipline of science and technology for making intelligent machines that simulate human abilities in learning, perception, thinking, decision-making, behavior, and interaction. The technology is widely used across society, including sensing data analysis and understanding, big data analytics, security surveillance, management and planning, education, medical care, robotics, unmanned driving, and so on.</p> <p>Keeping AI's impact on society beneficial motivates research in many areas, from economics and law to technical topics. <strong><em>Artificial Intelligence and Applications (AIA)</em></strong> is a peer-reviewed journal publishing original contributions to the theory, methods, and applications of artificial intelligence. AIA aims to advance the development and application of artificial intelligence by bringing together scientists in the field. Research articles and reviews on all scientific or application topics in AI are welcome.</p> <p>The journal is a Gold Open Access journal; online readers do not pay any fee.</p> <p><strong>The journal is currently free to authors, and all Article Processing Charges (APCs) are waived until 31 December 2024.</strong></p> Bon View Publishing Pte Ltd. en-US Artificial Intelligence and Applications 2811-0854 Artificial Intelligence Application in Law: A Scientometric Review https://ojs.bonviewpress.com/index.php/AIA/article/view/729 <p>Several topics, problems, and established legal principles are already being challenged using artificial intelligence (AI) in numerous applications. The capabilities of AI have been growing rapidly, to the point where it is evident that AI applications in law and various economic sectors help promote a good society.
However, questions such as who the prolific authors, papers, and institutions are, as well as what the specific and thematic areas of application are, remain unanswered. In the current study, 177 papers on AI applications in law published between 1960 and April 29, 2022, were retrieved from Scopus using keywords and analysed scientometrically. We identified the strongest citation bursts, the most prolific authors, countries/regions, and primary research interests, as well as their evolution trends and collaborative relationships over the past 62 years. The analysis also identified co-authorship networks, collaboration networks of countries/regions, co-occurrence networks of keywords, and a timeline visualisation of keywords. This study concludes that AI application in law (AIL) still lacks systematic study and sufficient attention. The methodical design of the required platforms; the collection, cleansing, and storage of data; and the collaboration of many stakeholders, researchers, and nations/regions are all problems that AIL must still overcome. Both researchers and industry professionals who are devoted to AIL will find value in these findings.</p> <p> </p> <p><strong>Received:</strong> 4 February 2023 <strong>| Revised:</strong> 25 May 2023 <strong>| Accepted:</strong> 9 June 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>The data that support the findings of this study are openly available at https://www.scopus.com.</p> Isaac Kofi Nti Samuel Boateng Juanita Ahia Quarcoo Peter Nimbe Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0 2023-06-30 2023-06-30 2 1 1 10 10.47852/bonviewAIA3202729 Let’s Have a Chat!
A Conversation with ChatGPT: Technology, Applications, and Limitations https://ojs.bonviewpress.com/index.php/AIA/article/view/939 <p>The advent of artificial intelligence-empowered chatbots capable of constructing human-like sentences and articulating cohesive essays has captivated global interest. This paper provides a historical perspective on chatbots, focusing on the technology underpinning the Chat Generative Pre-trained Transformer, better known as ChatGPT. We underscore the potential utility of ChatGPT across a multitude of fields, including healthcare, education, and research. To the best of our knowledge, this is the first review that not only highlights the applications of ChatGPT in multiple domains but also analyzes its performance on examinations across various disciplines. Despite its promising capabilities, ChatGPT raises numerous ethical and privacy concerns that are meticulously explored in this paper. Acknowledging the current limitations of ChatGPT is crucial in understanding its potential for growth. 
We also ask ChatGPT to provide its point of view and present its responses to several questions we attempt to answer.</p> <p> </p> <p><strong>Received:</strong> 5 April 2023<strong> | Revised:</strong> 23 May 2023<strong> | Accepted:</strong> 29 May 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>Data sharing is not applicable to this article as no new data were created or analyzed in this study.</p> Sakib Shahriar Kadhim Hayawi Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0 2023-06-02 2023-06-02 2 1 11 20 10.47852/bonviewAIA3202939 Attention Enhanced Siamese Neural Network for Face Validation https://ojs.bonviewpress.com/index.php/AIA/article/view/1018 <p>Few-shot computer vision algorithms have enormous potential to produce promising results for innovative applications that have only a small volume of example data for training. Currently, few-shot algorithm research focuses on applying transfer learning to deep neural networks that are pre-trained on big datasets. However, adapting these transformers requires highly costly computational resources. In addition, our research identifies overfitting and underfitting problems, as well as low accuracy on large classes, in the face validation domain. Thus, this paper proposes an alternative enhancement solution that adds contrasted attention to the negative and positive face pairs during the training process. Extra attention is created through clustering-based face pair creation algorithms.
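The clustering-based pair creation described here can be sketched roughly as follows; this is an illustrative reconstruction, not the author's implementation, and the random vectors below stand in for real face embeddings:

```python
# Illustrative sketch (not the paper's code): cluster stand-in face embeddings,
# then treat same-cluster pairs as positives and cross-cluster pairs as negatives.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(60, 16))  # hypothetical stand-in for face embeddings
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

positive_pairs, negative_pairs = [], []
for i in range(len(embeddings)):
    for j in range(i + 1, len(embeddings)):
        if labels[i] == labels[j]:
            positive_pairs.append((i, j))   # same cluster -> positive pair
        else:
            negative_pairs.append((i, j))   # different clusters -> negative pair
```

The resulting pair lists could then feed a contrastive or Siamese training loop, with extra attention placed on the hardest pairs.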
The evaluation results show that the proposed approach sufficiently addresses these problems without requiring high-cost resources.</p> <p> </p> <p><strong>Received:</strong> 26 April 2023 <strong>| Revised:</strong> 4 July 2023 <strong>| Accepted:</strong> 26 July 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>Hong Qing Yu is an associate editor for <em>Artificial Intelligence and Applications</em>, and was not involved in the editorial review or the decision to publish this article. The author declares that he has no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>The data that support the findings of this study are openly available in Kaggle: https://www.kaggle.com/datasets/olgabelitskaya/yale-face-database and https://www.kaggle.com/datasets/jessicali9530/lfw-dataset.</p> Hong Qing Yu Copyright (c) 2023 Author https://creativecommons.org/licenses/by/4.0 2023-08-14 2023-08-14 2 1 21 27 10.47852/bonviewAIA32021018 Exploring the Capabilities and Limitations of ChatGPT and Alternative Big Language Models https://ojs.bonviewpress.com/index.php/AIA/article/view/820 <p>ChatGPT, an artificial intelligence (AI)-powered chatbot developed by OpenAI, has gained immense popularity since its public launch in November 2022. With its ability to write essays, emails, poems, and even computer code, it has become a useful tool for professionals in various fields. However, ChatGPT’s responses are not always rooted in reality and are instead generated by a Generative Adversarial Network (GAN). This paper aims to build a text classification model for a chatbot using Python. The model is trained on a dataset consisting of customer responses to a survey and their corresponding class labels. Several classifiers are trained and tested, including naive Bayes, random forest, extra trees, and decision trees. The results show that the extra trees classifier performs best, with an accuracy of 90%.
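The classifier comparison described here can be sketched with scikit-learn; the tiny survey-response dataset, labels, and hyperparameters below are illustrative placeholders, not the authors' data or code:

```python
# Illustrative sketch of the described comparison: TF-IDF preprocessing feeding
# naive Bayes, random forest, extra trees, and decision tree classifiers.
# The toy "survey responses" are placeholders, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

texts = ["great service", "very helpful staff", "terrible experience",
         "slow and rude support", "loved the product", "awful quality"]
labels = ["positive", "positive", "negative", "negative", "positive", "negative"]

classifiers = {
    "naive_bayes": MultinomialNB(),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "extra_trees": ExtraTreesClassifier(n_estimators=50, random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}

scores = {}
for name, clf in classifiers.items():
    model = make_pipeline(TfidfVectorizer(), clf)  # preprocessing + classifier
    model.fit(texts, labels)
    scores[name] = model.score(texts, labels)      # accuracy on the toy data
```

On a real survey corpus the data would be split into train and test sets before scoring, and the best-scoring classifier kept for the chatbot.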
The system demonstrates the importance of text preprocessing and of selecting appropriate classifiers for text classification tasks when building an effective chatbot. In this paper, we also explore the capabilities and limitations of ChatGPT and its alternatives in 2023. We present a comprehensive overview of the alternatives’ performance. The paper concludes with a discussion of the future directions of large language models and their impact on society and technology.</p> <p> </p> <p><strong>Received:</strong> 1 March 2023 <strong>| Revised:</strong> 26 April 2023 <strong>| Accepted:</strong> 26 April 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>Data sharing is not applicable to this article as no new data were created or analyzed in this study.</p> Shadi AlZu'bi Ala Mughaid Fatima Quiam Samar Hendawi Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0/ 2023-04-27 2023-04-27 2 1 28 37 10.47852/bonviewAIA3202820 KELL: A Kernel-Embedded Local Learning for Data-Intensive Modeling https://ojs.bonviewpress.com/index.php/AIA/article/view/1381 <p>Kernel methods are widely used in machine learning. They introduce a nonlinear transformation to achieve a linearization effect: using linear methods to solve nonlinear problems. However, typical kernel methods like Gaussian process regression suffer from a memory consumption issue in data-intensive modeling: the memory required by the algorithms increases rapidly with the growth of data, limiting their applicability. Localized methods can split the training data into batches and largely reduce the amount of data used each time, effectively alleviating the memory pressure.
This paper combines the two approaches by embedding kernel functions into local learning methods and optimizing algorithm parameters, including the local factors and model orders. This results in the kernel-embedded local learning (KELL) method. Numerical studies show that, compared with kernel methods like Gaussian process regression, KELL can significantly reduce memory requirements for complex nonlinear models; compared with other, non-kernel methods, KELL demonstrates higher prediction accuracy.</p> <p> </p> <p><strong>Received:</strong> 20 July 2023 <strong>| Revised:</strong> 21 September 2023 <strong>| Accepted:</strong> 1 November 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The author declares that he has no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>Data sharing is not applicable to this article as no new data were created or analyzed in this study.</p> Changtong Luo Copyright (c) 2023 Author https://creativecommons.org/licenses/by/4.0 2023-11-10 2023-11-10 2 1 38 44 10.47852/bonviewAIA32021381 Methodological Characterization and Computational Codes in the Simulation of Interacting Galaxies https://ojs.bonviewpress.com/index.php/AIA/article/view/743 <p>Technological development has fostered an exponentially growing collection of dispersed and diversified information. In galaxy interaction studies, it is important to identify and recognize the parameters in the process, the tools, and the computational codes available, so as to select the appropriate one depending on the availability of data. The objective was to characterize the parameters, techniques, and methods developed, as well as the computational codes for numerical simulation.
From the bibliography, we reviewed how various authors have studied the interaction, the presence of gas, and star formation, and then reviewed the computational codes, with their requirements and benefits, in order to analyze and compare the initial and boundary conditions. A convolutional neural network programmed in Python was applied to images to identify the differences and their possible accuracy. Smoothed Particle Hydrodynamics codes use more robust algorithms with invariance, simple implementation, and flexible geometries, but struggle with the parameterization of artificial viscosities, discontinuous solutions, and instabilities. Adaptive mesh refinement codes require no artificial viscosity, resolve discontinuities, and suppress instabilities, but suffer from complex implementation, mesh details, and resolution problems, and are not scalable. It is necessary to use indirect methods to infer some properties or to perform preliminary iterations. The availability of observational data, together with tools such as a supercomputer, governs which numerical simulations are feasible, generating errors that can be adjusted, compared, or verified according to the techniques and methods shown in this study; notably, some less well-known codes stand out as those currently most applied.</p> <p> </p> <p><strong>Received:</strong> 8 February 2023 <strong>| Revised:</strong> 23 May 2023<strong> | Accepted:</strong> 16 June 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>Data sharing is not applicable to this article as no new data were created or analyzed in this study.</p> Eduardo Teófilo-Salvador Patricia Ambrocio-Cruz Margarita Rosado-Solís Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0 2023-06-27 2023-06-27 2 1 45 58
10.47852/bonviewAIA3202743 Heart Disease Prediction Using Support Vector Machine and Artificial Neural Network https://ojs.bonviewpress.com/index.php/AIA/article/view/823 <p>Heart-related illnesses, often known as cardiovascular diseases, have been the leading cause of mortality globally over the past several decades and are now recognized as the most significant illness in both India and the rest of the world. The severity of the disease can be reduced with proper care at the proper stage. The disease demands early and accurate prediction to avoid casualties. Where adequate medical support is lacking, the disease is not identified in time and treatment cannot be started. Machine learning algorithms have shown promise in predicting heart disease risk based on patient data. In this study, a machine learning-based heart disease prediction model is presented. The objective of the work is to build a machine learning-based model for early and adequate prediction of heart disease. The proposed model utilizes a support vector machine and an artificial neural network, with accuracies of 81.6% and 86.6%, respectively.
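The SVM-versus-neural-network comparison can be sketched as follows; the synthetic dataset stands in for the patient data (which are not public), and the hyperparameters are illustrative assumptions, not the authors' settings:

```python
# Illustrative sketch (not the paper's code): compare an SVM and a small
# neural network on synthetic tabular data standing in for clinical features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for patient records (e.g., age, blood pressure, cholesterol).
X, y = make_classification(n_samples=300, n_features=13, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))

svm.fit(X_train, y_train)
ann.fit(X_train, y_train)
svm_acc = svm.score(X_test, y_test)  # held-out accuracy of the SVM
ann_acc = ann.score(X_test, y_test)  # held-out accuracy of the neural network
```

Feature scaling before both models matters here: SVMs and neural networks are sensitive to feature magnitudes, unlike tree-based methods.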
The findings show that the model predicts heart disease risk with excellent accuracy, sensitivity, and specificity, offering healthcare professionals a useful tool to pinpoint people who may be more at risk of developing heart disease.</p> <p> </p> <p><strong>Received:</strong> 6 March 2023<strong> | Revised: </strong>17 April 2023<strong> | Accepted: </strong>23 April 2023 </p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>Data sharing is not applicable to this article as no new data were created or analyzed in this study.</p> Aishwarya Mondal Banhishikha Mondal Amaresh Chakraborty Agnita Kar Ayanty Biswas Annwesha Banerjee Majumder Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0/ 2023-04-27 2023-04-27 2 1 59 65 10.47852/bonviewAIA3202823 A Task Performance and Fitness Predictive Model Based on Neuro-Fuzzy Modeling https://ojs.bonviewpress.com/index.php/AIA/article/view/1010 <p>Recruiters' decisions in the selection of candidates for specific job roles are not only dependent on physical attributes and academic qualifications but also on the fitness of candidates for the specified tasks. In this paper, we propose and develop a simple neuro-fuzzy-based task performance and fitness model for the selection of candidates. This is accomplished by obtaining from Kaggle (an online database) samples of task performance-related data of employees in various firms. Data were preprocessed and divided into 60%, 20%, and 20% for training, validating, and testing the developed neuro-fuzzy-based task performance model respectively. The most significant factors influencing the performance and fitness rating of workers were selected from the database using the Principal Components Analysis (PCA) ranking technique. 
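The PCA-based ranking step can be sketched as follows; the feature names and data are hypothetical stand-ins, and ranking features by their absolute loadings on the first principal component is one common reading of PCA ranking, not necessarily the authors' exact procedure:

```python
# Illustrative sketch (not the paper's code): rank candidate features by their
# absolute loading on the first principal component of the standardized data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-ins for employee task-performance factors.
features = ["task_score", "attendance", "training_hours", "tenure_years", "overtime"]
X = rng.normal(size=(200, len(features)))
X[:, 0] += 2 * X[:, 1]  # correlate two features so PCA has structure to find

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
loadings = np.abs(pca.components_[0])           # |loading| on first component
ranking = sorted(zip(features, loadings), key=lambda t: -t[1])
```

The top-ranked features would then serve as inputs to the neuro-fuzzy model, with the remainder discarded.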
The effectiveness of the proposed model was assessed; it achieved an accuracy of 0.997, a Root Mean Square Error (RMSE) of 0.08, and a Mean Absolute Error (MAE) of 0.042.</p> <p> </p> <p><strong>Received:</strong> 25 April 2023 <strong>| Revised:</strong> 13 July 2023<strong> | Accepted:</strong> 27 July 2023</p> <p> </p> <p><strong>Conflicts of Interest</strong></p> <p>The authors declare that they have no conflicts of interest to this work.</p> <p> </p> <p><strong>Data Availability Statement</strong></p> <p>The data that support the findings of this study are openly available in the Kaggle dataset Employee Attrition and Factors at <a href="https://www.kaggle.com/datasets/thedevastator/employee-attrition-and-factors">https://www.kaggle.com/datasets/thedevastator/employee-attrition-and-factors</a>.</p> Femi Johnson Onashoga Adebukola Oluwafolake Ojo Adejimi Alaba Opakunle Victor Copyright (c) 2023 Authors https://creativecommons.org/licenses/by/4.0/ 2023-08-03 2023-08-03 2 1 66 72 10.47852/bonviewAIA32021010