Comparative Study on Accuracy of Responses by Select AI Tools: ChatGPT and Perplexity AI Vis-à-Vis Human Responses

dc.contributor.author: Salmon Oliech Owidi
dc.contributor.author: Joanne Nabwire Lyanda
dc.contributor.author: Eric W. Wangila
dc.date.accessioned: 2025-11-26T13:51:45Z
dc.date.issued: 2024-11
dc.description.abstract: This study explored questions whose solutions were provided by human experts, ChatGPT, and Perplexity AI. The responses were triangulated in discussions to identify oversights, alternative framings, and biases relative to human-generated insights. ChatGPT and Perplexity AI were selected for their popularity, ChatGPT having gained over 100 million users and Perplexity AI 87 million within a year. Educational specialists submitted questions across various fields, along with their responses, which were subsequently posed to the AI tools. These responses were coded and evaluated by twelve educational specialists and subject matter experts (N = 24) on scientific accuracy, actionability, and comprehensibility. Descriptive statistics indicated that Human Experts achieved significantly higher mean scores in both Scientific Accuracy (M = 7.42, SD = 0.65) and Actionability (M = 7.25, SD = 0.77) than ChatGPT (M = 6.25, SD = 0.71; M = 5.42, SD = 0.99) and Perplexity AI (M = 4.33, SD = 0.79; M = 4.17, SD = 1.06). For Comprehensibility, Human Experts scored highest (M = 7.08, SD = 1.24), followed by ChatGPT (M = 6.58, SD = 0.99) and Perplexity AI (M = 5.43, SD = 0.55). Kruskal-Wallis tests revealed significant differences across all dimensions (p < 0.001 for Scientific Accuracy and Actionability; p = 0.015 for Comprehensibility). Post-hoc Dunn's tests confirmed that Human Experts outperformed both AI tools, while ChatGPT was significantly more comprehensible than Perplexity AI. These findings highlight the limitations of AI in delivering scientifically accurate and actionable insights, owing to factors such as a lack of emotional intelligence and common sense. The study recommends careful evaluation of AI integration in academic and research contexts.
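The abstract's analysis rests on the Kruskal-Wallis test, a rank-based comparison of three or more independent groups. As a minimal sketch of that procedure (using hypothetical placeholder ratings, not the study's data, and omitting the tie correction since all values here are distinct):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups.

    Pools all observations, ranks them (1-based), and computes
    H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1),
    where R_i is the rank sum of group i. No tie correction is
    applied, so all values are assumed distinct.
    """
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Hypothetical expert ratings on a 1-8 scale (placeholders only)
human = [7.5, 7.0, 8.0]
chatgpt = [6.0, 6.5, 6.2]
perplexity = [4.0, 4.5, 4.2]

h = kruskal_wallis_h(human, chatgpt, perplexity)
print(round(h, 2))  # 7.2, above the chi-square critical value 5.99 (df = 2, alpha = 0.05)
```

A significant H only says that at least one group differs; locating which pairs differ requires a post-hoc procedure such as Dunn's test (available, for example, in the scikit-posthocs package), as the study reports.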
dc.identifier.citation: Comparative Study on Accuracy of Responses by Select AI Tools: ChatGPT and Perplexity AI Vis-à-Vis Human Responses
dc.identifier.issn: 2456-2165
dc.identifier.uri: https://repository.tmu.ac.ke/handle/123456789/238
dc.language.iso: en
dc.publisher: International Journal of Innovative Science and Research Technology
dc.relation.ispartofseries: Volume 9, Issue 11
dc.subject: Artificial Intelligence Tools
dc.subject: ChatGPT
dc.subject: Perplexity AI
dc.subject: Comparative Study
dc.title: Comparative Study on Accuracy of Responses by Select AI Tools: ChatGPT and Perplexity AI Vis-à-Vis Human Responses
dc.type: Article

Files

Original bundle
Name: TMU 1 2025.pdf
Size: 1015.31 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed to upon submission