Because of its importance, active learning is used in many sensitive applications. These applications, however, often operate in environments where adversaries act to limit or prevent accurate performance and to disrupt the normal activity of the application system. In such adversarial environments, for example in intrusion detection systems, active learning is vulnerable at the sampling stage: a successful attack at this stage is enough to guarantee that the model fails or becomes inaccurate. The attacker's aim is to maximally increase the risk of the learned model. In the sampling stage, the most representative or informative instances are selected from the unlabeled data for labeling, according to the sampling strategy; these unlabeled data are not checked before being offered to the selection process. An attacker can therefore target this stage and mislead the active learner by polluting the unlabeled instances. The contribution of this paper is a study of how such an attacker affects the strategies used to select the most informative instances in active learning for network intrusion detection, and of whether the effect varies with the strategy used. The experimental results show that the expected model change strategy is not significantly affected by the attack compared with the other strategies.
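The sampling-stage vulnerability described above can be illustrated with a minimal sketch. It is not the paper's method: it assumes a hypothetical toy logistic scorer and uses least-confidence uncertainty sampling (one common query strategy) over an unlabeled pool. Because the pool is never vetted, an attacker who injects an instance lying exactly on the decision boundary makes the sampler waste its query budget on the poisoned point.

```python
import math

def predict_proba(w, b, x):
    """Toy logistic model: probability that instance x belongs to the positive class."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def uncertainty_sampling(w, b, pool):
    """Least-confidence query strategy: pick the pool index whose
    predicted probability is closest to 0.5 (most uncertain)."""
    return min(range(len(pool)),
               key=lambda i: abs(predict_proba(w, b, pool[i]) - 0.5))

# Hypothetical model weights and a small unlabeled pool (2-D features).
w, b = [1.0, -1.0], 0.0
pool = [[3.0, 0.0],   # confidently positive
        [0.1, 0.0],   # near the boundary -> the legitimate informative instance
        [0.0, 3.0]]   # confidently negative

print(uncertainty_sampling(w, b, pool))  # selects index 1, the genuine boundary case

# The attacker pollutes the pool with a crafted instance that sits exactly
# on the model's decision boundary (predicted probability 0.5).
pool.append([0.0, 0.0])
print(uncertainty_sampling(w, b, pool))  # now selects index 3, the poisoned instance
```

This mirrors the threat model in the text: the attacker never touches the labeled data or the model, only the unchecked unlabeled pool, yet still controls which instance the active learner queries next.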