International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
AlignGPT: A Curriculum-Regularized Transformer Framework for Pedagogically Aligned Educational Language Modeling
| Author(s) | Kinshuk Dutta, Sabyasachi Paul, Ankit Anand |
|---|---|
| Country | United States |
| Abstract | Transformer-based language models exhibit remarkable linguistic fluency but remain poorly aligned with the pedagogical requirements of formal education. In instructional settings, correctness is defined not only by factual accuracy but by adherence to curriculum scope, sequencing, and learning objectives. This paper introduces AlignGPT, a curriculum-regularized transformer framework that formalizes pedagogical alignment as an explicit optimization objective during fine-tuning. Building upon prior syllabus-driven adaptations such as StudentGPT [1], we propose a curriculum alignment loss and curriculum coverage regularization to address both semantic relevance and topic imbalance. We provide theoretical justifications for these components, including differentiability properties, convergence guarantees under stochastic optimization, bounds on alignment deviation, and a generalization bound for the regularized objective. Empirical evaluations on simulated educational datasets demonstrate superior alignment scores (mean improvement of 18.4% over baselines) and reduced coverage imbalance (from 34.7% to 12.1% relative frequency skew). All experiments use resources and tooling available in 2021–early 2022, supporting reproducibility in constrained computational environments. |
| Keywords | Transformer Models, Curriculum Learning, Pedagogical Alignment, Educational NLP, Fine-Tuning, Ethical AI, Language Model Regularization, Optimization Theory. |
| Field | Engineering |
| Published In | Volume 3, Issue 3, May-June 2021 |
| Published On | 2021-06-10 |
| DOI | https://doi.org/10.36948/ijfmr.2021.v03i03.67508 |
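
The abstract names two training-time components, a curriculum alignment loss and a curriculum coverage regularizer, without reproducing their formulas on this page. The Python sketch below is one plausible reading under explicit assumptions: alignment is scored by cosine similarity to syllabus-topic embeddings, and coverage skew is penalized by a KL divergence toward the uniform topic distribution. All names here (`curriculum_alignment_loss`, `coverage_regularizer`, `tau`, the `lambda_*` weights) are hypothetical illustrations, not the authors' definitions.

```python
# Illustrative sketch only: the paper's formulas are not published on this
# page, so every function, name, and weighting below is an assumption.
import torch
import torch.nn.functional as F


def curriculum_alignment_loss(output_emb: torch.Tensor,
                              topic_embs: torch.Tensor) -> torch.Tensor:
    """Penalize outputs whose embedding drifts from every syllabus topic.

    output_emb: (batch, d) pooled embeddings of model outputs.
    topic_embs: (num_topics, d) embeddings of curriculum topics.
    Returns a scalar; 0 when each output matches some topic exactly.
    """
    # Cosine similarity of every output vs. every topic: (batch, num_topics)
    sims = F.cosine_similarity(output_emb.unsqueeze(1),
                               topic_embs.unsqueeze(0), dim=-1)
    best_sim, _ = sims.max(dim=1)  # similarity to the closest topic
    return (1.0 - best_sim).mean()


def coverage_regularizer(output_emb: torch.Tensor,
                         topic_embs: torch.Tensor,
                         tau: float = 0.1) -> torch.Tensor:
    """Penalize topic-frequency skew via KL(soft coverage || uniform).

    Soft (differentiable) topic assignments are averaged over the batch,
    then compared against the uniform distribution over topics.
    """
    sims = F.cosine_similarity(output_emb.unsqueeze(1),
                               topic_embs.unsqueeze(0), dim=-1)
    coverage = F.softmax(sims / tau, dim=-1).mean(dim=0)  # (num_topics,)
    uniform = torch.full_like(coverage, 1.0 / coverage.numel())
    # F.kl_div(log_q, p) computes KL(p || q); here KL(coverage || uniform).
    return F.kl_div(uniform.log(), coverage, reduction="sum")


# Hypothetical combined fine-tuning objective:
#   loss = lm_loss + lambda_align * curriculum_alignment_loss(emb, topics) \
#                  + lambda_cov * coverage_regularizer(emb, topics)
# where lm_loss is the usual next-token cross-entropy and the lambdas
# are tunable weights.
```

Using soft topic assignments via a temperature-scaled softmax keeps the regularizer differentiable end to end, which is at least consistent with the differentiability properties the abstract claims for both components.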