GraCoRe: Benchmarking Graph Comprehension and Complex Reasoning in Large Language Models, by Zike Yuan and 3 other authors
Abstract: Evaluating the graph comprehension and reasoning abilities of Large Language Models (LLMs) is challenging and often incomplete. Existing benchmarks focus primarily on pure graph understanding and lack a comprehensive evaluation across all graph types and detailed capability definitions. This paper presents GraCoRe, a benchmark for systematically assessing LLMs' graph comprehension and reasoning. GraCoRe uses a three-tier hierarchical taxonomy to categorize and test models on pure and heterogeneous graphs, subdividing their capabilities into 10 distinct areas tested through 19 tasks. Our benchmark includes 11 datasets with 5,140 graphs of varying complexity. We evaluate four closed-source and eight open-source LLMs, conducting thorough analyses from both ability and task perspectives. Key findings reveal that the OpenAI o1 model has remarkable comprehension and reasoning capabilities, that semantic enrichment enhances reasoning performance, that node ordering impacts task success, and that the ability to process longer texts does not necessarily improve graph comprehension or reasoning. GraCoRe is open-sourced at this https URL
Submission history
From: Zike Yuan
[v1]
Wed, 3 Jul 2024 09:12:38 UTC (3,396 KB)
[v2]
Wed, 26 Feb 2025 09:17:32 UTC (2,028 KB)