Abstract: Recent developments in artificial intelligence (AI) have significantly influenced educational technologies, reshaping the teaching and learning landscape. However, the notion of fully automating the teaching process remains contentious. This paper explores the concept of hybrid intelligence (HI), which emphasizes the synergistic collaboration between AI and humans to optimize learning outcomes. Despite the potential of AI-enhanced learning systems, their application in human-AI collaboration systems often fails to meet anticipated standards, and empirical evidence of their effectiveness is scarce. To address this gap, this study investigates whether formative feedback in an HI learning environment helps law students learn from their errors and write more structured and persuasive legal texts. We conducted a field experiment in a law course to analyse the impact of formative feedback on the exam results of 43 law students, as well as on the writer (the students), the writing product, and the writing process. In the control group, students received feedback conforming to common legal practice: they solved legal problems and subsequently received general feedback from a lecturer based on a sample solution. Students in the treatment group were provided with formative feedback that specifically targeted their individual errors, thereby stimulating internal cognitive processes within the students. Our investigation revealed that participants who received formative feedback rooted in their errors in structured and persuasive legal writing outperformed the control group in producing qualitatively better legal texts during an exam. Furthermore, the analysed qualitative student statements suggest that formative feedback promotes students' self-efficacy and self-regulated learning. Our findings indicate that integrating formative feedback rooted in individual errors enhances students' legal writing skills. This underscores the hybrid nature of AI, empowering students to identify their errors and improve in a more self-regulated manner.
Abstract: Artificial intelligence technologies are rapidly advancing. As part of this development, large language models (LLMs) are increasingly being used when humans interact with systems based on artificial intelligence (AI), posing both new opportunities and challenges. When interacting with LLM-based AI systems in a goal-directed manner, prompt engineering has evolved as the skill of formulating precise and well-structured instructions to elicit desired responses or information from the LLM, optimizing the effectiveness of the interaction. However, research on the perspectives of non-experts using LLM-based AI systems through prompt engineering and on how AI literacy affects prompting behavior is lacking. This aspect is particularly important when considering the implications of LLMs in the context of higher education. In the present study, we address this issue, introduce a skill-based approach to prompt engineering, and explicitly consider the role of non-experts' (students') AI literacy in their prompt engineering skills. We also provide qualitative insights into students' intuitive behaviors towards LLM-based AI systems. The results show that higher-quality prompt engineering skills predict the quality of LLM output, suggesting that prompt engineering is indeed a required skill for the goal-directed use of generative AI tools. In addition, the results show that certain aspects of AI literacy can play a role in higher-quality prompt engineering and the targeted adaptation of LLMs within education. We therefore argue for the integration of AI educational content into current curricula to enable a hybrid intelligent society in which students can effectively use generative AI tools such as ChatGPT.
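The abstract above describes prompt engineering as the skill of formulating precise, well-structured instructions for an LLM. The following is a minimal sketch of what the difference between an unstructured and a structured prompt could look like when sent programmatically; the model name, prompt wording, and use of the OpenAI Python client are illustrative assumptions and not part of the study's setup.

```python
# Illustrative sketch only: contrasting an unstructured prompt with a structured
# one (role, context, task, output constraints). Model name and client usage are
# assumptions for demonstration, not the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

unstructured = "Tell me something about negligence."

structured = (
    "You are a tutor for first-year law students.\n"                          # role
    "Context: the student is drafting a case solution on tort law.\n"         # context
    "Task: explain the elements of negligence in at most five bullet points.\n"  # task
    "Format: plain-text bullets, no citations, suitable for a novice."        # constraints
)

for label, prompt in [("unstructured", unstructured), ("structured", structured)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Comparing the two outputs makes the skill-based framing tangible: the structured prompt constrains role, context, task, and format, which is the kind of goal-directed instruction the study associates with higher-quality LLM output.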
Abstract: This paper explores the evolving field of prompt engineering in Artificial Intelligence (AI), with a focus on Large Language Models (LLMs). As LLMs exhibit remarkable potential in various educational domains, their effective use requires adept prompt engineering skills. We introduce a skill-based approach to prompt engineering and explicitly investigate the impact of using worked examples to facilitate prompt engineering skills among students interacting with LLMs. We propose hypotheses linking prompt engineering, worked examples, and perceived anthropomorphism to the quality of LLM output. Our initial findings support the critical relationship between proficient prompt engineering and the resulting output quality of LLMs. Subsequent phases will further explore the role of worked examples in prompt engineering, aiming to provide practical recommendations for educational improvement and industry application. Additionally, this research aims to shed light on the responsible utilization of LLMs in education and contribute insights to educational practice, research, and organizational development.
Abstract: Motivated by a holistic understanding of AI literacy, this work presents an interdisciplinary effort to make AI literacy measurable in a comprehensive way, considering generic and domain-specific AI literacy as well as AI ethics. While many AI literacy assessment tools have been developed in the last 2-3 years, mostly in the form of self-assessment scales and less frequently as knowledge-based assessments, previous approaches only accounted for one specific area of a comprehensive understanding of AI competence, namely cognitive aspects within generic AI literacy. Considering the demand for AI literacy development across different professional domains and reflecting on a concept of competence that goes beyond mere cognitive aspects of conceptual knowledge, there is an urgent need for assessment methods that capture domain-specific AI literacy on each of the three competence dimensions of cognition, behavior, and attitude. In addition, competencies for AI ethics are becoming more apparent, which further calls for a comprehensive assessment of AI literacy in this very matter. This conceptual paper aims to provide a foundation upon which future AI literacy assessment instruments can be built. It offers insights into what a framework for item development might look like, one that addresses generic and domain-specific aspects of AI literacy as well as AI ethics literacy and, following a holistic approach, measures more than just knowledge-related aspects.
Abstract: Novice students in law courses or students who encounter legal education face the challenge of acquiring specialized and highly concept-oriented knowledge. Structured and persuasive writing combined with the necessary domain knowledge is challenging for many learners. Recent advances in machine learning (ML) have shown the potential to support learners in complex writing tasks. To test the effects of ML-based support on students’ legal writing skills, we developed the intelligent writing support system LegalWriter. We evaluated the system’s effectiveness with 62 students. We showed that students who received intelligent writing support based on their errors wrote more structured and persuasive case solutions with a better quality of legal writing than the current benchmark. At the same time, our results demonstrated the positive effects on the students’ writing processes.
Abstract: Human-agent interaction is increasingly influencing our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural language interaction with personalization delivered through artificial intelligence capabilities. However, research on CAs as well as practical failures indicate that CA interaction often fails. To reduce these failures, this paper introduces the concept of building common ground for more successful human-agent interactions. Based on a systematic review, our analysis reveals five mechanisms for achieving common ground: (1) Embodiment, (2) Social Features, (3) Joint Action, (4) Knowledge Base, and (5) Mental Model of Conversational Agents. On this basis, we offer insights into grounding mechanisms and highlight the potentials of considering common ground in different human-agent interaction processes. Consequently, we secure further understanding and deeper insights into possible mechanisms of common ground in human-agent interaction in the future.
Abstract: Structured and persuasive writing is essential for effective communication, convincing readers of argument validity, and inspiring action. However, studies indicate a decline in students' proficiency in this area. This decline poses challenges in disciplines like law, where success relies on structured and persuasive writing skills. To address these issues, we present the results of our design science research project to develop an AI-based learning system that helps students learn legal writing. Our results from two different experiments with 104 students demonstrate the usefulness of our AI-based learning system to support law students independent of a human tutor, location, and time. Apart from furnishing our integrated software artifact, we also document our assessed design knowledge in the form of a design theory. This marks the first step toward a nascent design theory for the development of AI-based learning systems for legal writing.
Abstract: In an increasingly interconnected world, the ability to write in a clearly structured and persuasive manner is gaining importance, particularly in the field of law, as it is a fundamental component of effective legal communication. However, studies show that students' writing skills in this area are declining. To overcome these problems, we present an innovative AI-based writing support system that helps students learn legal writing. The system was deployed and evaluated in several sessions of a tutorial at a German university. The evaluation results demonstrate the usefulness of our AI-based writing system. Our research marks an important milestone in the development of AI-based learning systems for legal writing. It lays the foundation for future advances in this field and opens up new opportunities to explore the potential of AI in the area of legal writing.
Abstract: We present an annotation approach for capturing structured components and arguments in legal case solutions written by German students. Based on the appraisal style, which dictates the structured way of persuasive writing in German law, we propose an annotation scheme with annotation guidelines that identify structured writing in legal case solutions. We conducted an annotation study with two annotators and annotated legal case solutions to capture the structures of a persuasive legal text. Based on our dataset, we trained three transformer-based models to show that the annotated components can be successfully predicted, e.g., to provide users with writing assistance for legal texts. We evaluated a writing support system that integrates our models in an online experiment with law students and found positive effects on learning success and user perceptions. Finally, we present our freely available corpus of 413 law student case studies to support the development of intelligent writing support systems.
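The abstract above reports that transformer-based models were trained to predict the annotated argument components. A minimal sketch of what such a fine-tuning setup could look like is given below, assuming a HuggingFace-style sentence classification pipeline; the label set, model checkpoint, and toy data are illustrative assumptions and not the authors' actual configuration.

```python
# Minimal sketch: fine-tuning a German transformer to classify sentences of a
# legal case solution into argument components. Label set, checkpoint, and data
# layout are illustrative assumptions, not the paper's actual setup.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

LABELS = ["major_claim", "definition", "subsumption", "conclusion", "other"]  # assumed labels

# Toy examples standing in for annotated sentences from student case solutions.
examples = {
    "text": [
        "A koennte gegen B einen Anspruch auf Schadensersatz haben.",
        "Somit liegt eine Pflichtverletzung vor.",
    ],
    "label": [0, 3],
}
dataset = Dataset.from_dict(examples).train_test_split(test_size=0.5)

checkpoint = "deepset/gbert-base"  # assumed German BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(LABELS))

def tokenize(batch):
    # Pad/truncate each sentence so batches have a fixed length.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-components",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
```

In such a setup, the trained classifier can tag each sentence of a draft case solution with its component type, which is the kind of signal a writing support system can surface as feedback.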
Abstract: As educational organizations face difficulties in providing personalized learning material or individual learning support, pedagogical conversational agents (PCAs) promise individualized learning for students. However, the problems of conversational breakdowns of PCAs and, consequently, poor learning outcomes still exist. Hence, effective and grounded communication between learners and PCAs is fundamental to improving learning processes and outcomes. As understanding each other and conversational grounding are crucial for conversations between humans and PCAs, we propose common ground theory as a foundation for designing a PCA. Conducting a design science research project, we propose theory-motivated design principles and instantiate them in a PCA. We evaluate the utility of the artifact with an experimental study in higher education to inform the subsequent design iterations. We contribute design knowledge on conversational agents in learning settings, enabling researchers and practitioners to develop PCAs based on common ground research in education and providing avenues for future research. Thereby, we can secure a further understanding of learning processes based on grounded communication.
Abstract: Reading and synthesizing scientific papers is a crucial skill for students. However, many students in higher education struggle to effectively comprehend scientific texts. To address this challenge, research has leveraged computer-assisted reading (CAR) systems to improve students' reading comprehension abilities at scale. However, the research and application of CAR in higher education still lack an organized overview and clear terminology due to the multidisciplinary character of the research field (e.g., educational didactics, Human-Computer Interaction, or Information Systems). Therefore, we perform a systematic literature review on CAR from an interdisciplinary Information Systems perspective. We take socio-technical systems theory as a lens to organize and summarize past literature as well as to identify white spots for a future research agenda. The main contributions of this paper are the synthesis and consolidation of CAR research to create a basis for all researchers investigating the field of CAR in higher education.
Abstract: The upcoming AI Act will introduce regulatory sandboxes (German: KI-Reallabore) to promote innovation. Under Art. 54 of the draft AI Act (KI-VO-E), these provide for an exception to the purpose limitation principle for training AI applications with personal data. This contribution examines to what extent this exception to the purpose limitation principle is compatible with the GDPR. The requirements of Art. 6(4) GDPR are taken as the benchmark.
Abstract: If data processing is to be based on Art. 6(1) subpara. 1 lit. c or e GDPR, a legal basis in Union law or in the law of the Member States is required. Owing to the flawed design of Art. 6(2) GDPR alongside Art. 6(3) GDPR, it is disputed which of the two opening clauses the Member States must rely on to establish legal bases and which requirements apply to them. This contribution examines the relationship between the two provisions and proposes a subject-matter-based demarcation as an interpretative guideline for the application of each of the two paragraphs.