Current machine learning models are vulnerable to evasion attacks such as adversarial examples, raising security and safety concerns that still lack a clear solution. Recently, the use of random transformations has emerged as a promising defense against such attacks. Here, we hope to extend this general idea to build a defense that is secure, difficult to break even for strong adversaries, and efficient to deploy in practice. Additionally, insights gained from this work will broadly benefit the scientific communities that study stochastic neural networks and robustness properties.
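As a concrete illustration, the random-transformation idea can be sketched as follows: at inference time, the input is passed through a randomly sampled transformation before classification, and predictions are aggregated over several random draws so an attacker cannot exploit a single fixed forward pass. This is a minimal sketch with a toy linear model and a Gaussian-noise transform; all names, weights, and parameters here are illustrative assumptions, not the defense proposed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed random linear model
# mapping 10 input features to 3 class logits (hypothetical weights).
W = rng.normal(size=(10, 3))

def model(x):
    """Return class logits for a single input vector."""
    return x @ W

def random_transform(x, sigma=0.1):
    """One example transformation: additive Gaussian noise.
    Random resizing, padding, or color jitter are common alternatives."""
    return x + rng.normal(scale=sigma, size=x.shape)

def randomized_predict(x, n_samples=32):
    """Average logits over independently transformed copies of x,
    then return the predicted class index."""
    logits = np.mean(
        [model(random_transform(x)) for _ in range(n_samples)], axis=0
    )
    return int(np.argmax(logits))

x = rng.normal(size=10)
print(randomized_predict(x))
```

Averaging over many random draws smooths the decision, which is what makes gradient-based attacks harder to mount against such defenses, at the cost of extra forward passes per prediction.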