Michael D. Murray, University of Kentucky College of Law, has published Artificial Intelligence for Academic Support in Law Schools and Universities. Here is the abstract.
The current models of verbal generative artificial intelligence (AI)—Bing Chat, GPT-4 and ChatGPT, Bard, Claude, and others—and the current models of visual generative AI—DALL-E 2, Midjourney, Stable Diffusion, and others—can play a significant role in academic support in law schools and universities. Generative AI can help a student learn and understand material better, more deeply, and notably faster than traditional means of reading, rereading, notetaking, and outlining. AI can explain, elaborate on, and summarize course material. It can write and administer formative assessments, and, if desired, it can write self-guided summative evaluations and grade them. AI can translate material into and from foreign languages with a fidelity to context, usage, and nuances of meaning not previously seen in machine learning or neural network translation services. AI also can visualize material using the tools of visual generative AI, which literally paint pictures of the subjects and situations in the material, helping to overcome students' literacy issues both in the language of the communication and in the students' own native languages.

Beyond supporting student learning and academic success, AI can be a democratizing force because it can empower students to begin writing, drawing, or painting at a level for which their own life experiences and education have not prepared them. AI can empower students to perform creative, artistic, or literary activities related to legal education and law practice at a high level, catching them up to where other classmates would start. First-generation college-goers and graduate students can use the collective knowledge of a large language model to bring themselves to a higher starting point in the process of gaining admission to and finding success in legal education and, ultimately, in the practice of law.

The current text-based generative AI models, in the form of Microsoft's Bing Chat, Google's Bard, and OpenAI's ChatGPT, can improve on traditional methods of tutoring and academic support. They do so through the multimodal nature of their skills: AI can explain, elaborate on, and summarize course material; it can interpret, translate, visualize, or reorder parts of the material. AI can evaluate and correct the grammar, spelling, syntax, and style of a piece of student writing, a task which campus writing centers often avoid for pedagogical reasons or simply for logistical and resource-driven reasons. AI has become a master translator, easily converting communications from one language into many while monitoring the grammar, spelling, syntax, and style of the translated work for fidelity of usage in the target language. At the farther reaches, AI can communicate with illiterate and less-than-fully-literate students because several of the current AIs or their close corporate cousins speak the language of images (i.e., visual communication), generating visuals to illustrate, depict, diagram, or graph a concept. AIs can deliver the gold-standard level of one-on-one, personalized attention for tutoring and support.

Naturally, with this amount of power placed in the hands of faculty and administrators wielding AI tools, there is a commensurate amount of responsibility to use the AI professionally, equitably, and ethically. AI chatbots may sound human and exhibit a noticeable personality, but they are not persons; they are tools.
The writing of AI sounds very intelligent, but that is not because the AI itself is highly intelligent; rather, the AI has assimilated and synthesized the intelligent words of tens of thousands of intelligent writers and generated text that appears equally intelligent. But the AI does not think. It does not reason. It does not replace thinking for our students. AI merely has the extraordinary capacity to simulate the output of a reasonable, thinking person. Yet AI can also assimilate lies, biased content, hate speech, and harmful language, and synthesize and generate similar content without discriminating the good and the true from the bad and the harmful. Current textual generative AIs are trained on large language models whose source material was not evaluated for truth or bias, or fairness or hatefulness; the aim was to gather as much data as possible for the AI to work with. Volume of material was the operating criterion for building large language models, not truth, justice, equity, and inclusion. At the same time, AI has the capacity to collect and run through personal and biometric data, again without thinking, because AI does not think. This is an important part of the user experience of these models, and one that needs to be communicated to users who would turn to the AI for truth and correction on a wide range of deeply important topics.
Download the essay from SSRN at the link.