Case Study: Human-Centered Design of Art Fair App

Problem

I completed this individual project as a two-semester graduate capstone in Human-Computer Interaction at the Indiana University School of Informatics and Computing at IUPUI.

I was challenged to identify a problem space in which to research, design, prototype, evaluate, and iterate a novel interactive system using human-centered design methods.

I decided to design a system to support artists who apply to participate in juried art fairs.

Approach

Many communities and organizations sponsor art fairs where visitors can browse and purchase artwork directly from artists. Most people are unaware of the time, cost, and effort involved for artists to participate in these fairs.

My spouse is a professional artist who earns the majority of her income from selling her artwork at juried art fairs. I observed her dealing with two information management problems related to art fairs:

  • Finding reliable information about unfamiliar fairs in order to decide which ones were worth applying to
  • Keeping track of the application deadlines, notification dates, fees, and statuses for many fairs at once

I suspected many other art fair artists were experiencing the same problems. This was the inspiration for my project.

Problem Space

Over 1,500 art fairs are held annually throughout the United States. These fairs are typically held on weekends, and most are held outdoors (occurring rain or shine). Each participating artist is assigned a designated booth space (typically 10 feet by 10 feet) in which to set up a booth display and, for outdoor fairs, a tent, both of which the artist must provide.

Most art fairs use a competitive jurying process to select which artists are invited to participate. Months in advance, artists must submit an application with recent images of their artwork and pay an application fee (typically $25-40). Fairs typically receive many more applicants than they can accept. The applications are reviewed and scored by a jury selected by the art fair organizer. Applicants are then notified whether they were accepted, rejected, or placed on a waiting list. Accepted artists must confirm their participation by paying a booth fee (typically $200-500) by a specified deadline. Participating artists are responsible for their own travel expenses, such as gas, food, and hotels.

The total cost for an artist to participate in a particular art fair typically ranges from $250 to $1,000, depending on the fees and travel expenses. These costs must be recouped in sales at the fair before the artist begins to make a profit.

While some artists participate in fewer than 10 fairs each year, it is common for others to participate in 20-40 fairs every year, traveling across one or more regions of the United States. Because artists are not guaranteed acceptance to any particular fair, they typically apply to extra fairs as a backup plan (sometimes applying to multiple fairs occurring on the same dates).

Each art fair has its own unique schedule of deadlines for application, notification, confirmation, and cancellation. Keeping track of multiple deadlines for numerous art fairs is an information-intensive task for artists.

User Interviews

My target users for this project were artists who participate in multiple juried art fairs each year. I conducted user research to better understand the artists’ tasks, needs, and expectations in order to establish design requirements.

I conducted semi-structured interviews with 6 artists participating in a juried art fair held in Carmel, Indiana.

I asked each artist to respond to the following set of open-ended questions:

  1. Do you typically travel back home between each fair, or do you sometimes travel to multiple fairs before returning home?
  2. How do you learn about an art fair that you’ve never participated in before?
  3. What information or criteria help you decide to apply to a particular art fair?
  4. What other information do you wish you could know, in order to decide whether to apply to a particular art fair?
  5. What factors help you decide to apply again to an art fair that you’ve previously participated in?
  6. Do you usually complete the evaluation forms that art fair organizers provide?
  7. How could these evaluations provide more value to you and other artists?
  8. If you were designing a website to help artists plan for art fairs and decide which fairs to apply to, what information and features would be most important to include?

Insights from Interviews

All of the interviewed artists rely primarily on word-of-mouth from other artists for information and recommendations for possible art fairs to consider in the future. The most common topic of conversation among artists at fairs is asking each other about other art fairs they have recently done or will be participating in soon.

Two artists had previously used an existing online service (Art Fair SourceBook) that provides “insider” information about art fairs; however, both artists indicated the information was limited, and the service required an annual subscription they considered too expensive (and didn’t renew).

All artists stated that past sales information is the most important criterion used to determine which fairs to apply to (and return to). The artists also mentioned several other criteria as important factors.

However, most of the artists specifically cautioned that comparing sales among artists working in different mediums is like comparing “apples and oranges” because of differences in cost of materials, artwork price points, and buyer demand (e.g., jewelry is generally more popular than 2D work). They considered the most reliable sales information to be from artists working in a similar medium. Furthermore, artists rarely share specific sales amounts with one another; instead, they use generic qualitative descriptions (such as “my sales were good” or “my sales were lower than I expected”).

The artists therefore identified the most useful information for a system catering to art fair artists: anonymous sales data providing a breakdown of actual sales at each fair in past years, by artistic medium and by price point. Artist ratings and reviews of each fair would also be valuable.

Competitive Analysis

I observed an artist as she applied to art fairs through an existing online art fair application site (Zapplication) used by hundreds of juried art fairs. This allowed me to better understand the task flow of an existing system and to identify numerous opportunities to add utility, improve usability, and enhance the user experience.

I also investigated an existing online art fair review site (Art Fair SourceBook) to find out what types of information it provides to artists. This allowed me to identify certain opportunities to provide information that is more valuable to artists and easier to understand.

Solution

I decided that my solution would be a web-based system that provides artists with information about juried art fairs and allows them to submit and track their applications. I tentatively named the solution “Art Fair Tracker.”

Functional Requirements

The solution would need to allow artists to easily and efficiently:

  • Search for fairs using criteria that matter to artists
  • View detailed information about each fair, including past sales data and artist reviews
  • Track fairs of interest in watch lists and favorites
  • Submit applications to fairs
  • Monitor the statuses and deadlines of submitted applications
  • Confirm participation in fairs after acceptance
  • Evaluate fairs after participating

Site Map

I created a preliminary site map to show possible pages and pathways for navigation and interaction:

Site Map for Art Fair Tracker

Use Case Diagram

I created a use case diagram to identify all the use cases for artists and for art fair organizers. Although the solution would initially focus on the needs of artists, art fair organizers would also need to use the system and would have certain similar use cases, as well as certain complementary use cases. For example, artists need to submit applications to a fair, while art fair organizers need to evaluate the applications.

Use Case Diagram for Art Fair Tracker

Activity Diagram

I modeled the entire art fair application process from the artist’s perspective (encompassing multiple use cases) as an activity diagram. This revealed the multiple decision points, paths, and deadlines associated with this extended process, which occurs over several months.

My proposed system would not only need to support this existing process but also make the steps easier for artists to monitor and complete. At any given time, each art fair is at a different point in its own parallel version of this process, and monitoring the status and deadlines for multiple art fairs can be challenging for artists.

Activity Diagram for Art Fair Application Process

Design Ideation

I generated a series of sketches to explore design ideas for presenting information in the user interface, such as search results, dashboard, etc. The sketches were shown to a target user to gather feedback on which formats or layouts would be most useful and easiest to understand.

Sketches of Alternate Ideas for Art Fair Search Results

Sketches of Alternate Ideas for Artist’s Dashboard

Wireframes

Next, I created a low-fidelity prototype as wireframes on paper templates. Margin notes were included to describe various components and interactions for high-fidelity prototyping.

Although the solution would be available for use on all devices, the wireframes were drawn for a desktop browser, as this is the current context of use for artists applying to art fairs.

User interface screens included in the wireframes were:

Initial Wireframes for Home Page, Search Fairs (saved filters and custom filters), and Search Results

Initial Wireframes for View Fair (About, Applying, Deadlines, and Sales tabs)

Initial Wireframes for View Fair (Reviews tab) and My Fairs (Dashboard, Watching, and Favorites tabs)

High-Fidelity Prototype

I used the wireframes as a reference to develop an interactive, high-fidelity prototype, which I coded using PHP, HTML, CSS, and JavaScript (though today I would use a prototyping tool such as InVision Studio).

I used Bootstrap and jQuery to expedite the coding and provide increased interactivity. I chose PHP for page includes and for session variables that monitor and alter page states during the subsequent usability testing.
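
As a rough illustration, the following is a minimal sketch of the session-state approach, assuming hypothetical state names and actions (this is not the actual prototype code):

    <?php
    // Minimal sketch: PHP session variables let a click-through prototype
    // simulate persistence across page includes. The state names here
    // (watchList, applicationStatus, watchFairId) are hypothetical.
    session_start();

    // Initialize default state on the participant's first page view.
    if (!isset($_SESSION['watchList'])) {
        $_SESSION['watchList'] = [];         // fairs the artist is watching
        $_SESSION['applicationStatus'] = []; // fairId => 'applied', 'accepted', ...
    }

    // Example: handle an "Add to Watch List" action posted from a fair page.
    if (isset($_POST['watchFairId'])) {
        $_SESSION['watchList'][] = $_POST['watchFairId'];
    }

One benefit of this approach is that clearing the session returns the prototype to a known starting state for each test participant.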

As much as possible, I used realistic content in the high-fidelity prototype and minimized placeholder text (such as “Lorem ipsum”). This was intended to increase the validity of the subsequent usability testing.

Initial High-Fidelity Prototype for Search Fairs

Initial High-Fidelity Prototype for View Fair (About tab)

Initial High-Fidelity Prototype for View Fair (Sales tab)

Initial High-Fidelity Prototype for My Fairs (Dashboard tab)

Usability Testing

I conducted usability testing of the high-fidelity prototype with 5 participants (all artists with art fair experience). I asked each participant to follow a Think Aloud protocol while completing 5 task scenarios with the prototype:

  1. Search for Fairs
  2. Add Fair to Watch List
  3. View Fair Details
  4. View Dashboard of Submitted Fair Applications
  5. Edit Watch List

I discovered a number of usability issues as a result of this evaluation, which led to an iterative cycle of design changes.

Design Revisions

I made a number of revisions to the high-fidelity prototype to address specific issues revealed by the usability testing, as reflected in the revised screens below:

Revised High-Fidelity Prototype of Search Results

Revised High-Fidelity Prototype of View Fair (About tab)

Revised High-Fidelity Prototype of My Fairs (Dashboard tab)

Revised High-Fidelity Prototype of My Fairs (Watch List tab)

Design and Prototyping of Additional Tasks

Next, I created activity diagrams for three additional use cases (Apply to Fair, Confirm Participation in Fair, and Evaluate Fair After Participation), sketched low-fidelity wireframes for these tasks, and added them to the high-fidelity prototype:

Activity Diagram for Apply to Fair

Activity Diagram for Confirm Participation in Fair

Low-Fidelity Wireframes for Apply to Fair

Low-Fidelity Wireframes for Confirm Participation in Fair

High-Fidelity Prototype of Apply to Fair

High-Fidelity Prototype of Confirm Participation in Fair

High-Fidelity Prototype of Evaluate Fair After Participation

Usability Testing - Round 2

Next, I conducted another round of usability testing with 5 participants (all artists with art fair experience). I asked each participant to follow a Think Aloud protocol while completing 5 task scenarios with the revised high-fidelity prototype:

  1. Search for Fairs
  2. View Fair Details & Apply to Fair
  3. Confirm Participation in Fair
  4. Evaluate Fair After Participation
  5. View & Manage Dashboard of Submitted Fair Applications

I designed the task scenarios to evaluate the revisions made to the prototype, as well as the newly added tasks.

Data Analysis

During this second round of usability testing, I also gathered quantitative data about each user’s experience so that I could determine four usability metrics for the prototype: expectation measure, task success, lostness, and System Usability Scale (SUS) score.

Expectation Measure

The expectation measure is a self-reported metric of user experience developed by Albert and Dixon (2003) that compares the expected difficulty of a task, rated pre-study, to the experienced difficulty, rated post-task.
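
A minimal sketch of the computation, assuming the 7-point ease ratings commonly used with this metric (the numbers are illustrative, not the study data):

    <?php
    // Sketch: mean expected vs. experienced ease for one task, on a
    // 7-point scale (1 = very difficult, 7 = very easy). Illustrative data.
    function meanRating(array $ratings): float {
        return array_sum($ratings) / count($ratings);
    }

    $expected    = [3, 4, 3, 2, 4]; // pre-study ratings (hypothetical)
    $experienced = [6, 5, 6, 6, 5]; // post-task ratings (hypothetical)

    // Experienced ease above expected ease means the task turned out
    // easier than participants anticipated.
    printf("expected %.1f, experienced %.1f\n",
        meanRating($expected), meanRating($experienced));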

I created a plot of the expectation measures, which shows that users found all five tasks easier than expected, a positive finding for the user experience of the prototyped design.

Expectation Measure Graph

Task Success

Task success was categorized using a 4-point level of success scale (Tullis & Albert, 2013). I created a stacked bar chart representing the frequency distribution of the participants’ levels of success for each task.
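
For illustration, the frequency distribution behind such a chart can be tallied directly from each participant’s outcome per task; the sketch below uses outcomes consistent with the results reported after the chart:

    <?php
    // Sketch: tally levels of success per task for a stacked bar chart.
    // Levels follow the 4-point scale (no problem, minor problem, major
    // problem, failure). Outcomes shown match the reported Task 1-2 results.
    $outcomes = [
        'Task 1' => ['no problem', 'no problem', 'no problem', 'no problem',
                     'minor problem'],
        'Task 2' => ['no problem', 'no problem', 'minor problem',
                     'major problem', 'major problem'],
    ];

    foreach ($outcomes as $task => $levels) {
        echo $task, ': ', json_encode(array_count_values($levels)), "\n";
    }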

Task Success Chart

All the participants were able to complete all the tasks. The participants had no problems with Task 3, Task 4, and Task 5. One participant had a minor problem with Task 1 before completing the task. For Task 2, one participant had a minor problem, and two participants had a major problem, though eventually all of them did complete the task.

For Task 2, the problems occurred when the participants had to find their past sales history for a specific fair. This information was located on a newly added tab on the fair’s page. The “View Fair” page includes a lot of information divided among several tabs, so it would be valuable to reconsider the content organization for this page. The participants had no problems completing the remainder of this task, which involved submitting an application to this specific fair.

Lostness

Lostness is a performance metric of efficiency developed by Smith (1996) to study user navigation on websites when completing tasks. Lostness requires three input values:

  • N: the number of different pages visited while performing the task
  • S: the total number of pages visited while performing the task, counting revisits
  • R: the minimum (optimum) number of pages that must be visited to complete the task

The lostness score (L) is then calculated using the formula: L = √((N/S − 1)² + (R/N − 1)²)

Users with scores less than 0.4 do not appear to be lost, whereas users with scores greater than 0.5 show clear signs of being lost (Smith, 1996).
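
To make the formula concrete, here is a small sketch (not part of the study materials) that computes a lostness score from a logged sequence of page visits; the page names are hypothetical, and the optimal path length would come from the task design:

    <?php
    // Sketch: compute Smith's (1996) lostness score from a visit log.
    // $visited is the ordered list of pages a participant viewed;
    // $optimal is the minimum number of pages required for the task (R).
    function lostness(array $visited, int $optimal): float {
        $S = count($visited);               // total pages visited
        $N = count(array_unique($visited)); // different pages visited
        return sqrt(pow($N / $S - 1, 2) + pow($optimal / $N - 1, 2));
    }

    // Example: a participant backtracks while completing a 3-page task.
    echo lostness(['home', 'search', 'fair', 'search', 'fair'], 3); // 0.4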

Lostness by Participant and Task

    User    Task 1    Task 2    Task 3    Task 4    Task 5
    U1      0.00      0.00      0.00      0.00      0.00
    U2      0.17      0.71      0.00      0.00      0.00
    U3      0.00      0.00      0.00      0.25      0.00
    U4      0.00      0.66      0.00      0.00      0.00
    U5      0.00      0.40      0.00      0.00      0.00
    Mean    0.03      0.35      0.00      0.05      0.00

Two participants had lostness scores for Task 2 confirming they were lost during the task, while a third participant had a borderline score. These results are consistent with the levels of task success, in which these same participants encountered problems finding their past sales history for a fair (though all eventually completed the task).

System Usability Scale

The System Usability Scale (SUS) is a widely used, reliable metric developed by Brooke (1996) that is administered to users post-study. SUS has a maximum score of 100, with scores greater than 70 considered acceptable (Bangor et al., 2009).

I calculated the SUS scores based on each participant’s responses. The SUS scores from every participant were 70 or higher with a mean of 88, indicating that overall the participants rated the application positively and considered it usable.
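
SUS scoring follows a fixed recipe: each of the ten items uses a 5-point scale, odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch with a hypothetical response pattern:

    <?php
    // Sketch: compute a SUS score from ten 5-point responses (Brooke, 1996).
    function susScore(array $responses): float {
        $sum = 0;
        foreach ($responses as $i => $r) {      // $i is 0-based
            $sum += ($i % 2 === 0) ? ($r - 1)   // items 1, 3, 5, 7, 9
                                   : (5 - $r);  // items 2, 4, 6, 8, 10
        }
        return $sum * 2.5;                      // scale to 0-100
    }

    // Example: a strongly positive (hypothetical) response pattern.
    echo susScore([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]); // 100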

SUS Scores by Participant

    Participant    SUS Score
    U1             95.0
    U2             82.5
    U3             100.0
    U4             92.5
    U5             70.0
    Mean           88.0

Reflection

This project demonstrated that I could apply an iterative, human-centered design approach to create a useful and usable product from the ground up. While I certainly enjoyed the autonomy that this project offered, I also recognize that products and services benefit from having input from a diverse team of designers.

Test Drive Interactive Prototype

Open Prototype

Suggested Steps for Test Drive:

  1. On the home page, click the Log In button to see your Dashboard (as user Ansel Adams)
  2. In the Dashboard, you can:
    • Check the current status and next deadline for your art fair applications
    • Check your Watch list, Favorites list, and Archive
    • Confirm Participation in Covington Art Fair
    • Evaluate the Indiana Artisan Marketplace
  3. Next, Search Fairs using various criteria (note that search results are static in the prototype)
    • On the 2nd page of search results, click on Penrod Arts Fair to view it.
  4. On the Penrod Arts Fair page, you can:
    • View the fair information on the various tabs
    • Apply to Penrod Arts Fair

References

Albert, W., & Dixon, E. (2003). Is this what you expected? The use of expectation measures in usability testing. Proceedings of Usability Professionals Association 2003 Conference, Scottsdale, AZ.

Bangor, A., Kortum, P., & Miller, J.A. (2009). Determining what individual SUS scores mean: adding an adjective rating scale. Journal of Usability Studies, 4(3), 114-123.

Brooke, J. (1996). SUS: a quick and dirty usability scale. In P.W. Jordan, B. Thomas, B.A. Weerdmeester & I.L. McClelland (Eds.), Usability evaluation in industry. London: Taylor & Francis.

Smith, P.A. (1996). Towards a practical measure of hypertext usability. Interacting with Computers, 8(4), 365-381.

Tullis, T., & Albert, B. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics (2nd ed., pp. 70-73). Waltham, MA: Morgan Kaufmann.
