Browsing by Author "Salmon Oliech Owidi"

Now showing 1 - 1 of 1
    Item
    Comparative Study on Accuracy of Responses by Select AI Tools: ChatGPT and Perplexity AI Vis-à-Vis Human Responses
    (International Journal of Innovative Science and Research Technology, 2024-11) Salmon Oliech Owidi; Joanne Nabwire Lyanda; Eric W. Wangila
    This study explored questions whose solutions were provided by human experts, ChatGPT, and Perplexity AI. The responses were triangulated in discussions to identify oversights, alternative framings, and biases relative to human-generated insights. ChatGPT and Perplexity AI were selected for their popularity, ChatGPT having gained over 100 million users and Perplexity AI 87 million within a year. Educational specialists submitted questions from various fields, along with their own responses, and the same questions were then posed to the AI tools. The responses were coded and evaluated by twelve educational specialists and subject matter experts (N = 24) on scientific accuracy, actionability, and comprehensibility. Descriptive statistics indicated that Human Experts achieved significantly higher mean scores in both Scientific Accuracy (M = 7.42, SD = 0.65) and Actionability (M = 7.25, SD = 0.77) than ChatGPT (M = 6.25, SD = 0.71; M = 5.42, SD = 0.99) and Perplexity AI (M = 4.33, SD = 0.79; M = 4.17, SD = 1.06). In Comprehensibility, ChatGPT led the AI tools with a mean score of 6.58 (SD = 0.99), compared with Human Experts (M = 7.08, SD = 1.24) and Perplexity AI (M = 5.43, SD = 0.55). Kruskal-Wallis tests revealed significant differences across all dimensions (p < 0.001 for Scientific Accuracy and Actionability; p = 0.015 for Comprehensibility). Post-hoc Dunn's tests confirmed that Human Experts outperformed both AI tools, while ChatGPT was significantly more comprehensible than Perplexity AI. These findings highlight the limitations of AI in delivering scientifically accurate and actionable insights, owing to factors such as a lack of emotional intelligence and common sense. The study recommends careful evaluation of AI integration in academic and research contexts.
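
    The statistical workflow the abstract describes (rate each response on ordinal dimensions, test for an overall group difference with a Kruskal-Wallis test, then locate pairwise differences with Dunn's post-hoc test) can be outlined with standard Python libraries. The sketch below is not the authors' code: it assumes pandas, SciPy, and scikit-posthocs are available, and the ratings it uses are illustrative placeholders rather than the study's data.

    ```python
    # Minimal sketch: Kruskal-Wallis test plus Dunn's post-hoc comparison
    # of rater scores for three response sources. Placeholder data only.
    import pandas as pd
    from scipy.stats import kruskal
    import scikit_posthocs as sp

    # Hypothetical ordinal ratings of scientific accuracy per response source.
    ratings = pd.DataFrame({
        "source": ["Human"] * 6 + ["ChatGPT"] * 6 + ["Perplexity"] * 6,
        "accuracy": [7, 8, 7, 8, 7, 7,
                     6, 7, 6, 6, 7, 6,
                     4, 5, 4, 4, 5, 4],
    })

    # Kruskal-Wallis H test: do the three sources differ overall?
    groups = [g["accuracy"].values for _, g in ratings.groupby("source")]
    h_stat, p_value = kruskal(*groups)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

    # Dunn's post-hoc test (Bonferroni-adjusted) identifies which pairs
    # of sources differ significantly.
    posthoc = sp.posthoc_dunn(ratings, val_col="accuracy",
                              group_col="source", p_adjust="bonferroni")
    print(posthoc)
    ```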
