Draft Rubric: Mandala Browser
by David Morgan and Sarah Scalet
The Mandala Browser is a software tool that visually links search results. For example, using the text of William Shakespeare’s Romeo and Juliet, the prototype connects word usage with speakers. By entering specific terms and conditions into the search function, the user can see how often a word appears in the text; the tool also draws visual relationships between specific words in the play and the characters who speak them. Although the prototype demonstrates connections within a single text, the creators emphasize the software’s tremendous flexibility. According to the Mandala Browser’s website, “The design provides enormous flexibility in terms of the number of criteria used, the number of items represented, and the types of items represented.” That flexibility makes the software an ideal tool for hypothesis building in the academic realm. It directly advances the study of literature, as the Romeo and Juliet prototype shows, but it could in principle be extended to any field that involves textual study.
The Mandala Browser is a piece of software designed to help users make connections within and between texts. It is written in Java and runs as a self-contained applet; on the back end, the data is stored in flexible XML files. To investigate a text, the user encodes the attributes they wish to compare into an XML file and loads it into the Java applet. The project was developed by a research team spread across several Canadian universities. Notably, the browser’s website is outdated: the most recent publications it lists are from 2011, and its call for book proposals closed in 2012. Even so, guides for using Mandala in the classroom are available online [http://maker.uvic.ca/mandala/], and the browser appears to remain relevant and in use.
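To give a sense of what this encoding step involves, a Mandala-style input file might look roughly like the sketch below. The element and attribute names here are illustrative guesses, not the browser’s actual schema.

```xml
<!-- Hypothetical sketch of a Mandala-style XML data file.
     Element and attribute names are illustrative only. -->
<play title="Romeo and Juliet">
  <line speaker="Romeo" act="2" scene="2">
    But, soft! what light through yonder window breaks?
  </line>
  <line speaker="Juliet" act="2" scene="2">
    O Romeo, Romeo! wherefore art thou Romeo?
  </line>
</play>
```

Each attribute the user codes (here, speaker, act, and scene) could then serve as a search criterion inside the applet, letting the browser draw visual links between, say, a word and the characters who speak it.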
[Test-run of the program. Image source: screen shot from David Morgan’s computer.]
It would be nearly impossible to create one rigid evaluation that would encompass every Digital Humanities project. The potential scope is simply too great. So rather than creating a master evaluation below, we have laid out a few simple guidelines and an even simpler grading scale. For each element, a project can get a thumbs down, a neutral flat palm, or a thumbs up. The thumbs down indicates that the project is sorely lacking in that area or omits it altogether. A flat palm is akin to a shrug: the project may have some of the features, but it doesn’t execute them well or fully embrace the idea. Simply put, it’s bland. The coveted thumbs up is a splash of color: it indicates that the project gets it and is doing it in a noteworthy or impressive way.
1. Innovation – Does this project do something new or valuable? In the modern era our lives are clogged with information. A good project should offer meaningful insight or a novel perspective. [Mandala seems to deserve a thumbs up for contributing to academic study in a new way.]
2. Approachability – Can the user easily understand the project and its context? [After a few trial periods, we finally got the software working. Thumbs down for the steep learning curve.]
3. Intentionality – Good design can convey as much information as text. From an aesthetic and functional perspective, how well is the project designed? Does the presentation help or hinder the message? [Thumbs up for being easy to read and understand once the user intentionally chooses distinct colors.]
4. Engagement – A proper DH project isn’t unidirectional. Does the user have a meaningful way of using, contributing to, or engaging with the project? This may take the form of user-submitted content, discussion forums, teachers’ guides, or open data available for future study. [Thumbs up because the software’s flexible design lets it be adapted to study different connections across different texts.]
Image Source: http://leadinganswers.typepad.com/.