Latest News
Publisher: JNU Guangdong Institute of Smart Education
Date: January 3, 2025
In a significant achievement for the Guangdong Institute of Smart Education at Jinan University, six papers authored by faculty and students have been accepted for presentation at the prestigious AAAI 2025 conference, organized by the Association for the Advancement of Artificial Intelligence (AAAI). This announcement was made in December 2024, following a competitive selection process that saw a total of 12,957 submissions, with an acceptance rate of just 23.4%.
The AAAI conference, renowned for its high standards in artificial intelligence and machine learning research, will take place in Philadelphia, Pennsylvania, from February 25 to March 4, 2025. The accepted papers cover a range of innovative topics, including attention-based knowledge tracing and cognitive fluctuation modeling.
Overview of Selected Papers
1. Title: Rethinking and Improving Student Learning and Forgetting Processes for Attention-Based Knowledge Tracing Models
Authors: Bai Youheng, Li Xueyi, Liu Zitao, Huang Yaying, Tian Mi, Luo Weiqi
Corresponding Author: Liu Zitao
Summary: This paper introduces LFPKT, a unified architecture that enhances attention-based knowledge tracing models by incorporating relative forgetting attention. It effectively addresses challenges in modeling forgetting behaviors and improves the extrapolation ability of attention mechanisms, demonstrating significant performance improvements across three datasets. The dataset and code will be made publicly available at [pykt.org](https://pykt.org/).
2. Title: Cognitive Fluctuations Enhanced Attention Network for Knowledge Tracing
Authors: Hou Mingliang, Li Xueyi, Guo Teng, Liu Zitao, Tian Mi, Luo Renqiang, Luo Weiqi
Corresponding Author: Guo Teng
Summary: This paper presents FlucKT, an attention network designed to model short-term cognitive fluctuations in knowledge tracing tasks. By dynamically reweighting cognitive features and enhancing attention mechanisms, FlucKT shows improved generalization and predictive performance on various datasets.
3. Title: What Are Step-Level Reward Models Rewarding? Counterintuitive Findings from MCTS-Boosted Mathematical Reasoning
Authors: Ma Yiran, Chen Zui, Liu Tianqiao, Tian Mi, Liu Zhuo, Liu Zitao, Luo Weiqi
Corresponding Author: Liu Zitao
Summary: This study investigates the mechanisms behind step-level reward models (SRMs) in MCTS-boosted mathematical reasoning, revealing counterintuitive findings about what these models actually reward and offering guidance for developing more efficient SRMs.
4. Title: A Syntactic Approach to Computing Complete and Sound Abstraction in the Situation Calculus
Authors: Fang Liangda, Wang Xiaoman, Chen Chang, Luo Karen, Cui Zhenhe, Guan Quanlong
Corresponding Author: Guan Quanlong
Summary: This research proposes a new syntactic method for computing complete and sound abstractions of low-level action theories in the situation calculus, addressing a gap in existing frameworks.
5. Title: Vision Transformers Beat WideResNets on Small Scale Datasets Adversarial Robustness
Authors: Wu Juntao, Song Ziyu, Zhang Xiaoyu, Xie Shujun, Lin Longxin, Wang Ke
Corresponding Author: Wang Ke
Summary: Challenging the prevailing belief that WideResNet models dominate adversarial robustness on small-scale datasets, this paper demonstrates that Vision Transformers (ViTs) can achieve superior adversarial robustness and accuracy, paving the way for future research on efficient architectures.
This achievement underscores the Guangdong Institute of Smart Education's commitment to advancing research in artificial intelligence and its applications in education.