This paper describes a user study in which humans interactively train automatic text classifiers. We attempt to replicate previous results using many "average" Internet users, rather than a few domain experts, as annotators. We also analyze users' annotation behaviors and find that certain labeling actions affect classifier accuracy, drawing attention to the important role such behavioral factors play in interactive learning systems.