This individual project was completed for a two-semester graduate capstone project in Human-Computer Interaction at the Indiana University School of Informatics and Computing at IUPUI.
I was challenged to identify a problem space in which to research, design, prototype, evaluate, and iterate a novel interactive system using human-centered design methods.
I decided to design a system to help artists who apply to participate in juried art fairs.
Many communities and organizations sponsor art fairs where visitors can browse and purchase artwork directly from artists. Most people are unaware of the time, cost, and effort involved for artists to participate in these fairs.
My spouse is a professional artist who earns the majority of her income from selling her artwork at juried art fairs. I observed her dealing with two information management problems related to art fairs:
I suspected many other art fair artists were experiencing the same problems. This was the inspiration for my project.
Over 1,500 art fairs are held annually throughout the United States, typically on weekends and most often outdoors (rain or shine). Each participating artist is assigned a designated booth space (typically 10 feet by 10 feet) in which to set up a display and, for outdoor fairs, a tent, both of which the artist must provide.
Most art fairs use a competitive jurying process to select which artists may participate. Months in advance, artists must submit an application with recent images of their artwork and pay an application fee (typically $25-40). Fairs typically receive many more applications than they can accept. The applications are reviewed and scored by a jury selected by the art fair organizer, and applicants are then notified whether they were accepted, rejected, or placed on a waiting list. Accepted artists must confirm their participation by paying a booth fee (typically $200-500) by a specified deadline. Participating artists are also responsible for their own travel expenses, such as gas, food, and lodging.
The total cost for an artist to participate in a particular art fair typically ranges from $250 to $1,000, depending on the fees and travel expenses. These costs must first be recouped in sales at the fair before the artist makes any profit.
While some artists participate in fewer than 10 fairs each year, it is common for others to participate in 20-40 fairs annually, traveling across one or more regions of the United States. Because acceptance to any particular fair is not guaranteed, artists typically apply to extra fairs as a backup (sometimes applying to multiple fairs held on the same dates).
Each art fair has its own unique schedule of deadlines for application, notification, confirmation, and cancellation. Keeping track of multiple deadlines for numerous art fairs is an information-intensive task for artists.
My target users for this project were artists who participate in multiple juried art fairs each year. I conducted user research to better understand the artists' tasks, needs, and expectations, in order to establish design requirements.
I conducted semi-structured interviews with 6 artists participating in a juried art fair held in Carmel, Indiana:
I asked each artist to respond to the following set of open-ended questions:
All of the interviewed artists rely primarily on word-of-mouth from other artists for information and recommendations for possible art fairs to consider in the future. The most common topic of conversation among artists at fairs is asking each other about other art fairs they have recently done or will be participating in soon.
Two artists had previously used an existing online service (Art Fair SourceBook) that provides “insider” information about art fairs; however, both indicated that the information was limited and that the annual subscription was too expensive, so neither renewed.
All artists stated that past sales information is the most important criterion used to determine which fairs to apply to (and return to). Other criteria mentioned as important factors were:
However, most of the artists specifically cautioned that comparing sales among artists working in different mediums is like comparing “apples and oranges” because of differences in cost of materials, artwork price points, and buyer demand (e.g., jewelry generally sells better than 2D work). They considered the most reliable sales information to be from artists working in a similar medium. Furthermore, artists rarely share specific sales amounts with one another; instead, they use generic qualitative descriptions (such as “my sales were good” or “my sales were lower than I expected”).
Therefore, the artists identified the most useful information for a system catering to art fair artists: anonymous sales data providing a breakdown of actual sales at each art fair in past years, by artistic medium and by price point. Artist ratings and reviews of each fair would also be valuable information.
I observed an artist as she applied to art fairs through an existing online art fair application site (Zapplication) used by hundreds of juried art fairs. This allowed me to better understand the task flow of an existing system and to identify numerous opportunities to add utility, improve usability, and enhance the user experience.
I also investigated an existing online art fair review site (Art Fair SourceBook) to find out what types of information it provides to artists. This allowed me to identify certain opportunities to provide information that is more valuable to artists and easier to understand.
I decided that my solution would be a web-based system that provides artists with information about juried art fairs and allows artists to submit and track their applications to those fairs. I tentatively named my solution "Art Fair Tracker."
The solution would need to allow artists to easily and efficiently:
I created a preliminary site map to show possible pages and pathways for navigation and interaction:
Site Map for Art Fair Tracker
I created a use case diagram to identify all the use cases for artists and for art fair organizers. Although the solution would initially focus on the needs of artists, art fair organizers would also need to use the system and would have certain similar use cases, as well as certain complementary ones. For example, artists need to submit applications to a fair, while organizers need to evaluate those applications.
Use Case Diagram for Art Fair Tracker
I modeled the entire art fair application process from the artist’s perspective (which encompasses multiple use cases) as an activity diagram, which revealed multiple decision points, paths, and deadlines associated with this extended process that occurs over several months.
My proposed system would not only need to support this existing process but also make its steps easier for artists to monitor and complete. At any given time, each art fair is at a different point in its own parallel version of this process, so monitoring the status and deadlines for multiple fairs can be challenging for artists.
Activity Diagram for Art Fair Application Process
I generated a series of sketches to explore design ideas for presenting information in the user interface, such as search results, dashboard, etc. The sketches were shown to a target user to gather feedback on which formats or layouts would be most useful and easiest to understand.
Sketches of Alternate Ideas for Art Fair Search Results
Sketches of Alternate Ideas for Artist’s Dashboard
Next, I created a low-fidelity prototype as wireframes on paper templates. Margin notes described various components and interactions to guide high-fidelity prototyping.
Although the solution would be available for use on all devices, the wireframes were drawn for a desktop browser, as this is the current context of use for artists applying to art fairs.
User interface screens included in the wireframes were:
Initial Wireframes for Home Page, Search Fairs (saved filters and custom filters), and Search Results
Initial Wireframes for View Fair (About, Applying, Deadlines, and Sales tabs)
Initial Wireframes for View Fair (Reviews tab) and My Fairs (Dashboard, Watching, and Favorites tabs)
I used the wireframes as a reference to develop an interactive, high-fidelity prototype, which I coded using PHP, HTML, CSS, and JavaScript (though today I would use a prototyping tool such as InVision Studio).
I used Bootstrap and jQuery to expedite the coding and increase interactivity, and I used PHP for page includes and for session variables to monitor and alter page states during the subsequent usability testing.
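As an illustration, here is a minimal sketch of that approach, not the actual prototype code; the variable and file names are hypothetical:

```php
<?php
// Minimal sketch of session-based page state in a PHP prototype.
// Variable and file names here are hypothetical illustrations.
session_start();

// Track which fairs the user has "applied" to during this test session.
if (!isset($_SESSION['applied_fairs'])) {
    $_SESSION['applied_fairs'] = array();
}

// Record an application submitted from the Apply to Fair form.
if (isset($_POST['apply_fair_id'])) {
    $_SESSION['applied_fairs'][] = $_POST['apply_fair_id'];
}

// Later pages can alter their state based on the session; for example,
// the View Fair page can show "Applied" instead of an "Apply" button.
$fair_id = isset($_GET['fair']) ? $_GET['fair'] : '';
$has_applied = in_array($fair_id, $_SESSION['applied_fairs']);

// Shared page chrome is pulled in with includes.
include 'header.php';
?>
```

Keeping state in the session meant each test participant could move through a realistic multi-page application flow without the prototype needing a database.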
As much as possible, I used realistic content in the high-fidelity prototype and minimized placeholder text (such as “Lorem ipsum”). This was intended to increase the validity of the subsequent usability testing.
Initial High-Fidelity Prototype for Search Fairs
Initial High-Fidelity Prototype for View Fair (About tab)
Initial High-Fidelity Prototype for View Fair (Sales tab)
Initial High-Fidelity Prototype for My Fairs (Dashboard tab)
I conducted usability testing of the high-fidelity prototype with 5 participants (all artists with art fair experience). I asked each participant to follow a Think Aloud protocol while completing 5 task scenarios with the prototype:
I discovered a number of usability issues as a result of this evaluation, which led to an iterative cycle of design changes.
I made the following revisions to the high-fidelity prototype to address specific issues revealed by the usability testing:
Revised High-Fidelity Prototype of Search Results
Revised High-Fidelity Prototype of View Fair (About tab)
Revised High-Fidelity Prototype of My Fairs (Dashboard tab)
Revised High-Fidelity Prototype of My Fairs (Watch List tab)
Next, I created activity diagrams for three additional use cases, sketched low-fidelity wireframes for these tasks, and added these tasks to the high-fidelity prototype:
Activity Diagram for Apply to Fair
Activity Diagram for Confirm Participation in Fair
Low-Fidelity Wireframes for Apply to Fair
Low-Fidelity Wireframes for Confirm Participation in Fair
High-Fidelity Prototype of Apply to Fair
High-Fidelity Prototype of Confirm Participation in Fair
High-Fidelity Prototype of Evaluate Fair After Participation
Next, I conducted another round of usability testing with 5 participants (all artists with art fair experience). I asked each participant to follow a Think Aloud protocol while completing 5 task scenarios with the revised high-fidelity prototype:
I designed the task scenarios to evaluate the revisions made to the prototype, as well as the newly added tasks.
During this second round of usability testing, I also gathered quantitative data about each user’s experience, so that I could determine these usability metrics for the prototype:
The expectation measure is a self-reported metric of user experience developed by Albert and Dixon (2003) that compares the expected difficulty of a task, rated pre-study, with the experienced difficulty, rated post-task.
I created a plot of the expectation measures, which showed that users found all five tasks easier than expected, a positive finding for the user experience of the prototyped design.
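For illustration, here is a minimal sketch of the comparison, assuming a 7-point rating where higher means easier; the rating values are invented examples, not the study data:

```php
<?php
// Sketch of the expectation measure comparison (Albert & Dixon, 2003).
// Assumes a 7-point rating where higher = easier; values are invented.
$expected    = array('Task 1' => 4.2, 'Task 2' => 3.8, 'Task 3' => 4.0,
                     'Task 4' => 4.4, 'Task 5' => 4.1); // rated pre-study
$experienced = array('Task 1' => 5.6, 'Task 2' => 5.0, 'Task 3' => 6.2,
                     'Task 4' => 6.0, 'Task 5' => 6.4); // rated post-task

foreach ($expected as $task => $before) {
    $after   = $experienced[$task];
    $verdict = ($after >= $before) ? 'easier than expected'
                                   : 'harder than expected';
    printf("%s: expected %.1f, experienced %.1f => %s\n",
           $task, $before, $after, $verdict);
}
?>
```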
Task success was categorized using a 4-point level of success scale (Tullis & Albert, 2013). I created a stacked bar chart representing the frequency distribution of the participants’ levels of success for each task.
All the participants were able to complete all the tasks. The participants had no problems with Task 3, Task 4, and Task 5. One participant had a minor problem with Task 1 before completing the task. For Task 2, one participant had a minor problem, and two participants had a major problem, though eventually all of them did complete the task.
For Task 2, the problems occurred when the participants had to find their past sales history for a specific fair. This information was located on a newly added tab on the fair’s page. The “View Fair” page includes a lot of information divided among several tabs, so it would be valuable to reconsider the content organization for this page. The participants had no problems completing the remainder of this task, which involved submitting an application to this specific fair.
Lostness is a performance metric of efficiency developed by Smith (1996) to study user navigation on websites while completing tasks. Lostness requires three input values:

- N: the number of different pages visited while performing the task
- S: the total number of pages visited while performing the task (counting revisits)
- R: the minimum (optimal) number of pages that must be visited to complete the task
The lostness score L is then calculated using the formula: L = √((N/S − 1)² + (R/N − 1)²)
Users with scores less than 0.4 do not appear to be lost, whereas users with scores greater than 0.5 show clear signs of being lost (Smith, 1996).
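As a sketch of the arithmetic (with illustrative inputs, not the study data):

```php
<?php
// Lostness (Smith, 1996): $n = unique pages visited, $s = total pages
// visited (counting revisits), $r = minimum pages required for the task.
function lostness($n, $s, $r) {
    return sqrt(pow($n / $s - 1, 2) + pow($r / $n - 1, 2));
}

// Example: a task needing 4 pages, but the user visited 11 (7 unique).
$score = lostness(7, 11, 4);        // ≈ 0.56
echo $score > 0.5 ? 'lost' : ($score < 0.4 ? 'not lost' : 'borderline');
?>
```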
| User | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
|------|--------|--------|--------|--------|--------|
| U1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| U2 | 0.17 | 0.71 | 0.00 | 0.00 | 0.00 |
| U3 | 0.00 | 0.00 | 0.00 | 0.25 | 0.00 |
| U4 | 0.00 | 0.66 | 0.00 | 0.00 | 0.00 |
| U5 | 0.00 | 0.40 | 0.00 | 0.00 | 0.00 |
| Mean | 0.03 | 0.35 | 0.00 | 0.05 | 0.00 |
Two participants had lostness scores for Task 2 confirming they were lost during the task, while a third participant had a borderline score. These results are consistent with the levels of task success, in which the same participants encountered problems finding their past sales history for a fair (though all eventually completed the task).
The System Usability Scale (SUS) is a widely used, reliable metric developed by Brooke (1996) that is administered to users post-study. SUS has a maximum score of 100, with scores greater than 70 considered acceptable (Bangor et al., 2009).
I calculated the SUS scores from each participant's responses. Every participant's score was 70 or higher, with a mean of 88, indicating that participants rated the application positively overall and considered it usable.
| Participant | SUS Score |
|-------------|-----------|
| U1 | 95.0 |
| U2 | 82.5 |
| U3 | 100.0 |
| U4 | 92.5 |
| U5 | 70.0 |
| Mean | 88.0 |
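For reference, here is a minimal sketch of the standard SUS scoring arithmetic (Brooke, 1996); the sample responses are invented, not a participant's actual answers:

```php
<?php
// Standard SUS scoring: ten items rated 1-5; odd-numbered (positively
// worded) items contribute (rating - 1), even-numbered (negatively
// worded) items contribute (5 - rating); the sum is scaled by 2.5.
function sus_score(array $responses) {
    if (count($responses) !== 10) {
        throw new InvalidArgumentException('SUS requires exactly 10 responses.');
    }
    $sum = 0;
    foreach (array_values($responses) as $i => $rating) {
        // $i is 0-based, so even indexes are the odd-numbered items.
        $sum += ($i % 2 === 0) ? ($rating - 1) : (5 - $rating);
    }
    return $sum * 2.5;     // yields a score from 0 to 100
}

echo sus_score(array(5, 2, 4, 1, 5, 2, 5, 1, 4, 2)); // 87.5
?>
```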
This project was invaluable for demonstrating that I could apply an iterative, human-centered design approach to create a valuable and usable product from the ground up. While I certainly enjoyed the autonomy that this project offered, I also recognize that products and services benefit from having input from a diverse team of designers.
Suggested Steps for Test Drive:
Albert, W., & Dixon, E. (2003). Is this what you expected? The use of expectation measures in usability testing. Proceedings of Usability Professionals Association 2003 Conference, Scottsdale, AZ.
Bangor, A., Kortum, P., & Miller, J.A. (2009). Determining what individual SUS scores mean: adding an adjective rating scale. Journal of Usability Studies, 4(3), 114-123.
Brooke, J. (1996). SUS: a quick and dirty usability scale. In P.W. Jordan, B. Thomas, B.A. Weerdmeester & I.L. McClelland (Eds.), Usability evaluation in industry. London: Taylor & Francis.
Smith, P.A. (1996). Towards a practical measure of hypertext usability. Interacting with Computers, 8(4), 365-381.
Tullis, T., & Albert, B. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd ed., pp. 70-73). Waltham, MA: Morgan Kaufmann.