Bloodgood, Michael; Vijay-Shanker, K

A Method for Stopping Active Learning Based on Stabilizing Predictions and the Need for User-Adjustable Stopping

Abstract: A survey of existing methods for stopping active learning (AL) reveals the need for methods that are: more widely applicable; more aggressive in saving annotations; and more stable across changing datasets. A new method for stopping AL based on stabilizing predictions is presented that addresses these needs. Furthermore, stopping methods are required to handle a broad range of annotation/performance tradeoff valuations. Despite this, the existing body of work is dominated by conservative methods, with little (if any) attention paid to giving users control over the behavior of stopping methods. The proposed method is shown to fill a gap in the level of aggressiveness available for stopping AL and to support giving users control over stopping behavior.

Keywords: computer science; statistical methods; artificial intelligence; machine learning; computational linguistics; natural language processing; human language technology; text processing; active learning; selective sampling; query learning; binary classification; text classification; named entity classification; biomedical named entity classification; annotation bottleneck; annotation costs; stopping criteria; stopping methods; stabilizing predictions; agreement metrics; agreement statistics; contingency table analysis; stop set; stop set construction; Kappa statistic; Cohen's Kappa; inter-model agreement; F-measure; F-score; annotation/performance tradeoff; aggressive stopping; conservative stopping; user-adjustable stopping; support vector machines; SVMs

Language: en-US
Type: Article
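The abstract describes stopping AL when the learner's predictions stabilize, as measured by inter-model agreement (Cohen's Kappa) on a held-out stop set. A minimal sketch of that idea follows; the Kappa computation is the standard formula, while the agreement threshold and window size used here are illustrative assumptions, not values taken from this record:

```python
def cohens_kappa(preds_a, preds_b):
    """Cohen's Kappa: chance-corrected agreement between two label sequences,
    e.g. the predictions of two successive AL models on a fixed stop set."""
    assert preds_a and len(preds_a) == len(preds_b)
    n = len(preds_a)
    # Observed agreement: fraction of stop-set items labeled identically.
    p_o = sum(a == b for a, b in zip(preds_a, preds_b)) / n
    # Expected agreement under chance, from each model's label distribution.
    labels = set(preds_a) | set(preds_b)
    p_e = sum((preds_a.count(l) / n) * (preds_b.count(l) / n) for l in labels)
    if p_e == 1.0:
        return 1.0  # both models constant and identical
    return (p_o - p_e) / (1 - p_e)

def should_stop(kappa_history, threshold=0.99, window=3):
    """Stop AL once the last `window` consecutive inter-model Kappas all
    exceed `threshold` (hypothetical defaults for illustration)."""
    if len(kappa_history) < window:
        return False
    return all(k > threshold for k in kappa_history[-window:])
```

In use, each new AL iteration would append `cohens_kappa(prev_preds, curr_preds)` to `kappa_history` and halt annotation when `should_stop` returns `True`; tightening or loosening `threshold` is one natural knob for the user-adjustable, aggressive-versus-conservative tradeoff the abstract emphasizes.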