# Exam AI – something I built during the MeDo hackathon

The author didn't join the MeDo hackathon with a polished grand idea, just a desire to build something they'd actually use. Exam AI tackles the messy reality of exam studying: you read notes, search for answers, forget half of it, then cram at the last minute. Instead of passive reading, Exam AI makes you think actively. Give it a topic and it generates exam-style questions for you to answer, then provides detailed explanations rather than just the correct answer. It adapts as you interact, letting you dig deeper into unclear concepts through conversational back-and-forth. What surprised the author most was how quickly the project came together without getting stuck in setup or overengineering. Key challenges: AI output quality depends heavily on how you phrase prompts, it's easy to overbuild, and making explanations genuinely useful is harder than it sounds. Future plans include personalization that adapts to each user's weak areas.

## Background and Context

The development of Exam AI emerged directly from the constraints and creative energy of the MeDo hackathon, a competitive environment that prioritizes rapid prototyping over polished commercial viability. The author, Eszter Kovacs, approached the event not with a pre-packaged business plan but with a pragmatic desire to build a utility addressing a personal and widespread inefficiency in academic preparation. The traditional model of exam studying is characterized by passive consumption: students read through dense notes, perform fragmented searches for specific facts, and forget most of the material until the final moments before an assessment. This cycle ends in last-minute cramming, a method widely recognized as ineffective for long-term retention and deep understanding. Exam AI was conceived as a direct countermeasure, aiming to transform the study process from a one-way reception of information into active cognitive engagement.

The core philosophy behind Exam AI is rooted in the pedagogical concept of "active recall." Rather than allowing users to simply re-read their materials, the application requires them to retrieve information from memory. The workflow is straightforward yet powerful: a user inputs a topic or subject area they wish to review, and the system immediately generates a set of exam-style questions. These are not simple true/false queries but structured questions designed to mimic the rigor of actual academic assessments. This initial interaction serves as a diagnostic, revealing what the user knows and, more importantly, what they do not. By placing the burden of generation on the AI, the tool removes the friction of creating study materials, letting students focus entirely on answering and learning.

## Deep Analysis

The technical architecture of Exam AI distinguishes itself through its dual-output mechanism. When a user answers a generated question, the system does not merely provide a binary correct/incorrect validation; it generates a detailed, contextual explanation. This addresses a critical gap in many existing AI study tools, which often fail to explain the "why" behind a correct response. The AI is tasked with breaking down complex concepts so that the user understands the underlying logic rather than just memorizing a fact.

The application also supports multi-turn conversational interaction. If a user finds an explanation unclear or wishes to explore an aspect of a topic in greater depth, they can engage in a back-and-forth dialogue with the AI. This adaptive capability lets the tool act as a tutor, adjusting the complexity and focus of its responses based on the user's immediate feedback and questions, creating a personalized learning loop.
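To make this concrete, here is a minimal sketch of how such a generate-grade-discuss loop might be wired up. The article does not describe Exam AI's actual stack, so the OpenAI chat-completions client, the model name, and the prompts below are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch of the core loop: generate questions for a topic,
# then grade answers and handle follow-ups in the same conversation.
# The model name, prompts, and OpenAI client are assumptions; the article
# does not specify Exam AI's actual stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an exam tutor. Generate rigorous exam-style questions on the "
    "given topic. When grading an answer, never reply with a bare "
    "correct/incorrect verdict: explain the underlying concept step by step "
    "and point out common misconceptions."
)


def start_session(topic: str) -> list[dict]:
    """Open a conversation that begins with generated questions for a topic."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Generate 3 exam-style questions about: {topic}"},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    return messages


def send(messages: list[dict], user_text: str) -> str:
    """Send a user turn (an answer or a follow-up question) and return the reply."""
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer
```

Because the full message history is resent on every turn, the model can see which questions it generated, how they were answered, and what it has already explained, which is what makes the tutor-style follow-up dialogue possible.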
Despite the apparent simplicity of the concept, development presented significant technical hurdles, particularly in prompt engineering and scope management. The author noted that the quality of the AI's output depends heavily on the precision of the prompts: crafting instructions that yield genuinely useful, nuanced explanations rather than generic, superficial ones required extensive iteration and fine-tuning. This challenge is compounded by the risk of overengineering. In the rush to build a comprehensive tool, it is easy to add unnecessary features that bloat the application and distract from its core utility. The author exercised strict boundary control, resisting the urge to build a full-fledged learning management system and instead focusing on the minimal viable product that effectively facilitated active recall. The surprise of the project was not just its functionality but the speed at which it came together, proof that disciplined scoping enables rapid, high-quality development even within the tight timeframe of a hackathon.

## Industry Impact

Exam AI represents a microcosm of the broader shift in the EdTech sector toward AI-driven, personalized learning experiences. Traditional educational technology has often struggled to move beyond digitized textbooks and static quizzes; Exam AI leverages the generative capabilities of modern large language models to create dynamic, on-demand study aids that adapt to individual needs. This approach has significant implications for how students prepare for high-stakes exams, potentially democratizing access to high-quality tutoring. By automating the creation of practice questions and detailed explanations, the tool lowers the barrier to effective study strategies, which traditionally required either significant self-discipline or expensive private tutoring. The emphasis on active recall aligns with cognitive science research suggesting that tools facilitating this method may lead to better academic outcomes than passive review.

The project also highlights the evolving role of independent developers in the AI ecosystem. Rather than competing with large tech companies on infrastructure, developers like Kovacs focus on niche, high-impact applications that solve specific pain points. Exam AI demonstrates that valuable AI tools can be built rapidly by individuals who understand both the technical capabilities of LLMs and the practical needs of end users. This trend encourages a more diverse and innovative landscape in educational software, where agility and user-centric design can outpace the slower, more bureaucratic development cycles of larger corporations. The focus on active thinking over passive consumption sets a new standard for what an AI study companion should be, pushing the industry to prioritize depth of understanding over mere information retrieval.

## Outlook

Looking ahead, the roadmap for Exam AI includes adaptive personalization. The current version provides a generalized experience based on the user's input, but future iterations aim to track individual performance over time. By analyzing which topics or question types a user consistently struggles with, the AI will be able to identify knowledge gaps and proactively generate targeted practice questions to reinforce those weak areas. This shift from reactive to proactive learning would turn the tool from a simple question generator into a comprehensive academic assistant that evolves with the user: a feedback loop in which the AI continuously refines its teaching strategy based on the user's progress, ensuring that study time is spent on the areas most in need of improvement.
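As a sketch of what that performance tracking could look like, the snippet below records per-topic accuracy and surfaces the weakest topics to feed back into question generation. The `ProgressTracker` class and its "lowest accuracy first" selection policy are illustrative assumptions; the article describes the goal of targeting weak areas, not a concrete design.

```python
# Minimal sketch of the planned personalization loop: track per-topic
# accuracy and pick the weakest topics for the next batch of questions.
# The class and selection policy are assumptions, not Exam AI's design.
from collections import defaultdict


class ProgressTracker:
    """Records per-topic answer accuracy and surfaces the weakest areas."""

    def __init__(self) -> None:
        self.attempts: dict[str, int] = defaultdict(int)
        self.correct: dict[str, int] = defaultdict(int)

    def record(self, topic: str, was_correct: bool) -> None:
        """Log one answered question for a topic."""
        self.attempts[topic] += 1
        if was_correct:
            self.correct[topic] += 1

    def weakest_topics(self, n: int = 3) -> list[str]:
        """Return the n attempted topics with the lowest accuracy."""
        def accuracy(topic: str) -> float:
            return self.correct[topic] / self.attempts[topic]

        return sorted(self.attempts, key=accuracy)[:n]


# Usage: the weakest topics would be fed back into question generation.
tracker = ProgressTracker()
tracker.record("photosynthesis", was_correct=False)
tracker.record("photosynthesis", was_correct=True)
tracker.record("cell division", was_correct=False)
print(tracker.weakest_topics(n=1))  # ['cell division']
```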
Additionally, the author plans to refine the prompt engineering techniques to further improve the quality and relevance of the generated explanations. As the user base grows, the system will need to handle a wider variety of subjects and academic levels, requiring more sophisticated contextual understanding. The lessons learned during the MeDo hackathon, particularly the importance of avoiding overengineering and the power of clear, iterative prompt design, will inform the long-term architecture of the project. Exam AI stands as a testament to the potential of hackathon-born projects to evolve into sustainable, impactful educational tools, bridging the gap between AI experimentation and practical, everyday utility for students worldwide.