Stability in Readability and Syntactic Complexity of GPT-3.0-generated Texts
An investigation into the linguistic features of GPT-3.0-generated texts, exploring their use in detecting AI-generated content and curbing academic dishonesty.
We found that GPT-3.0-generated texts exhibit lower variance in linguistic metrics such as readability and syntactic complexity than human-written texts, indicating greater uniformity of style and structure.
We also trained machine learning models, ranging from gradient-boosted decision trees to deep neural networks, to classify texts as AI-generated or human-written based on the extracted features, achieving 90.3% accuracy.
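A minimal sketch of the kind of feature extraction described above, assuming a Flesch reading ease score as the readability metric; the syllable-counting heuristic and helper names here are illustrative, not the study's actual pipeline:

```python
import re
import statistics

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups, minimum of one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch reading ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_sent = max(1, len(sentences))
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (syllables / n_words)

def readability_variance(texts):
    # Population variance of readability scores across a corpus;
    # lower values indicate more uniform texts.
    return statistics.pvariance(flesch_reading_ease(t) for t in texts)
```

Scores like these, computed per text, can serve directly as classifier features; comparing `readability_variance` across an AI-generated corpus and a human-written corpus illustrates the uniformity finding.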