Les (Jim) Fischer
Oct 25, 2015
Table of Contents:
– Unboxing and Physical Description
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion
– Summary and Conclusion
– Testing Methodology: Adjustments, reticle size, reticle cant
– Testing Methodology: Comparative optical evaluation
I believe the greatest strength of the Vortex Company is its attunement to the market. Optics companies in general do not have the best track record with this. Many of the European companies, in particular, draw a good deal of criticism from myself and others for being completely unaware of (or unwilling to provide) the features the market demands in their optics. This is perhaps not surprising from companies whose sporting optics are often handled by small divisions of what amounts to camera, eyeglass, or, not too long ago, decorative crystal companies. Vortex is quite different from this, being a small, family-owned company with family members who are themselves active competitive shooters and regular contributors on the various forums dedicated to the same and similar pursuits. This degree of connection with the intended customer, at this level in the company, is both unusual and quite effective when it comes to providing compelling products. I tend to view the Razor HD II line as the child of this unparalleled industry connection and the rapidly improving manufacturing capabilities of Japanese optic OEMs, though this somewhat shortchanges Vortex’s increasingly central role in the design of these products. They were very excited to show me the HD II line of products and particularly the 4.5-27x model. Now that I have spent some time with it, I am not surprised.
Unboxing and Physical Description:
The Razor HD II 4.5-27x56mm comes in an ample double wide textured cardboard box with a very nice cut out foam lining for protection. In with the scope are: a shade (non-honeycomb), caps (not flip, but rather slip), a lens cloth, an adjustment wrench tool, battery, bumper sticker, inspection card, and two manuals (scope and reticle). This is a very complete set of extras, although, with the shade being non-honeycomb and especially the caps being slip rather than flip, I am left wishing a bit for fewer, better extras, as I do not have much need for slip caps: they really don’t serve much purpose beyond initial shipping and, given the quality of the padding, are wholly unnecessary in that role for this scope. Taking a look over the manuals, I found them both well thought out, with clear explanations and diagrams of the basic information necessary for scope and reticle use. Along with the scope, Vortex sent me a set of their “precision matched riflescope rings” for testing. These appear to be Seekins rings. This again illustrates the market involvement of the guys at Vortex in picking a partner, as Seekins are probably the most popular basic aluminum rings.
The Razor HD II 4.5-27x56mm itself is rather nice looking, finished in the grayish tan Vortex has adopted in lieu of the typical black, and having nice proportions. It is fairly short at 14.4″, but utterly obese at 48.5oz. That is not a misprint. Vortex uses steel in its turrets in place of the brass most companies use or the aluminum some use, and that, combined with the 4 lens objective system, makes this optic substantially the heaviest I have ever used. This is strikingly illustrated by the fact that it actually weighs a little over twice as much as the Leupold Mark 6 3-18x I last reviewed. The controls of the Vortex are nicely laid out and designed, with the side parallax knob also housing a locking illumination control system and the turrets being a non-spring-loaded pull-up-to-adjust, push-down-to-lock design. The turrets are 10 mils per rev, zero stop, and actually feature three turns with a visual and tactile indicator for the second and third turns. In testing, I measured 16.8 mils from optical center to elevation stop, so it is actually conceivable that, with an extreme base and a crazy long shot, you could possibly use that 3rd turn.
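To put that 16.8 mils of travel in context, a canted base shifts the erector toward one end of its range, effectively adding the base’s value, converted to mils, to the elevation available above zero. A quick sketch of the arithmetic (the 20 MOA base here is my illustration, not something included with the scope):

```python
import math

# 1 mil = 1/1000 radian; 1 true MOA = 1/60 degree
MOA_PER_MIL = (1 / 1000) / (math.pi / (180 * 60))  # ~3.438 MOA per mil

def base_gain_mils(base_moa):
    """Extra 'up' travel (in mils) a canted base adds over a flat one."""
    return base_moa / MOA_PER_MIL

# Measured: 16.8 mils from optical center to the elevation stop
usable_up = 16.8 + base_gain_mils(20)
print(round(base_gain_mils(20), 2))  # -> 5.82
print(round(usable_up, 1))           # -> 22.6
```

At roughly 22.6 mils of usable elevation with a 20 MOA base, dialing into that third turn is plausible only on very long shots, which matches the “extreme base, crazy long shot” caveat above.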
Probably the only element of the adjustment system of the Razor HD II 4.5-27x that I found unusual was the setting of the zero stop on the L-TEC turrets. You do not really set the zero stop, per se; rather, with the turret set on zero, you actually zero the scope using the center adjustment screw under the cap. You do this with the turret set screws loose. While this is not what I was used to, it actually worked quite well and was in some ways better than the typical zero-the-scope, loosen-the-screws, move-the-scale-to-zero procedure. The whole zeroing procedure can be accomplished using the two-sided tool provided with the scope for that purpose.
At the time of this writing, three reticle options exist in the Razor HD II 4.5-27x. I should probably criticize the relative paucity of options, but because I really like one of them, I’m not going to. The options are a mil hash, mil hash with tree, and MOA (Vortex uses true MOA, not shooters’ MOA) hash with tree. I chose the EBR-2C MRAD reticle, which is the mil hash with tree option. There are a number of things about this reticle that I like. Firstly, it is a fine reticle, and this allows for very precise aiming. I tend to prefer this to coarser designs that, I gather, must appeal to someone. The reticle also offers a .1 mil fine scale section that I can use to most accurately measure my target for ranging. This is not offered in most scopes, although I believe it should be a more common feature. Lastly, the reticle offers a Christmas tree section that isn’t too busy, though probably still a little busier than I would make it. Sometimes you simply don’t have the time to dial everything in and you just have to hold, so I like to have a decent Christmas tree for these less than ideal situations. All in all, I am quite satisfied with the design. I think it will appeal to competitive shooters in particular, as the design features appear selected with that specifically in mind.
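The value of that .1 mil fine scale comes from the standard mil relation, under which ranging error scales directly with measurement error. A minimal sketch (the 0.5 m target is a hypothetical example, not from the review):

```python
def range_from_mil(target_size_m, measured_mils):
    """Mil relation: a 1 m object subtending 1 mil is 1000 m away."""
    return target_size_m * 1000 / measured_mils

# A 0.5 m target read as 1.0 mil vs 1.1 mils with the fine scale:
print(round(range_from_mil(0.5, 1.0)))  # -> 500
print(round(range_from_mil(0.5, 1.1)))  # -> 455
```

A tenth of a mil of read error moves the range estimate by roughly 45 m in this example, which is exactly why a finer measuring scale matters.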
Comparative Optical Evaluation:
The guys at Vortex were very confident I was going to be pleased with this scope’s optical performance. This model in particular they were very proud of, and they were quite eager to have me put it up against anything at all I cared to. At the time I tested the Vortex Razor HDII 4.5-27×56, I had quite a variety of optics on hand to compare side by side with it. These optics were: the USO LR-17 3.2-17×44, Nightforce SHV, Burris XTR II 4-20×50, Leupold MK6 3-18×44, and an older Zeiss Conquest 4.5-14×44. This suite of test optics varied widely in price and included both scopes aimed at the tactical market and those designed to appeal to hunters. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.
Pretty early on in the optical evaluation, it became apparent that the scopes were sorting themselves into three groups. The USO and Vortex were clearly optically superior to the others. They had bigger fields of view, higher resolution, better contrast, and lower chromatic aberration. They were also very close to each other in performance. After a bit of a gap in performance, the next group was also very close to each other and included the Leupold, SHV, and Zeiss. The Burris brought up the rear, not really comparing closely with anything else in the analysis despite its price being very close to that of the SHV and almost double that of the Zeiss. Because of these clear tiers, I spent most of my time comparing this Vortex to the USO LR-17.
In many ways, parsing the optical performance of the Vortex Razor HDII 4.5-27×56 vs. the USO LR-17 is splitting hairs. Both were quite exceptional, and I doubt very much that anyone will be unsatisfied with the optical performance of either. Some of what we are here to do, though, is split hairs, and since we can probably see those hairs through either of these two scopes, we had best commence. Keep in mind the difficulty of this: the slightest changes in lighting as cloud thickness shifted, or whatnot, were enough to make me constantly change and reverse opinions about which had better resolution (USO), contrast (USO), or color rendition (Vortex). A more certain judgment is that the eyebox on the Vortex was more forgiving of head position than the USO and that its edges were better. Also certain is that the Vortex suffered more image loss as the adjustments were moved near the limit of the adjustment range and farther from optical center. At 14 mils, things were indeed pretty hairy for the Vortex. This drop-off was precipitous rather than gradual, though, so through most of the adjustment range things looked very good. Following the optical testing, it was apparent to me why the Vortex guys had been so excited about this Razor HD II. It represents a clear advancement over the past Razors that I have seen and appears quite competitive with scopes formerly considered in a class of performance beyond the Vortex Razor line.
Mechanical Testing and Turret Discussion:
The Razor HDII 4.5-27×56 comes with Vortex’s new L-Tec turrets. I am not sure why every maker now comes up with a meaningless acronym for their turrets, but L-Tec is what Vortex settled on. When compared to the turrets on Gen 1 Razors (those turrets apparently predating the appointed time at which all turrets must be given an acronym), the Gen 2 L-Tec turrets are a bit shorter and significantly larger in diameter. They are also more feature rich, including an indicator for the second and third turns, as well as a push-down-to-lock feature. The feel also seems to me to have been improved. The L-Tecs turn with a nice degree of resistance, have positive tactile and audible clicks, and generally just feel good. The zero adjustment has also been changed, as I mentioned earlier. The windage L-Tec knob is quite similar to the elevation one, except that the stop is after 7 mils in each direction. The parallax knob is styled very similarly to the adjustments and is of almost identical size. Also incorporated in this knob is the illumination control. This is a nice upgrade from the last generation Razor, which was fitted with a wart-style unit. The new illumination control is also a push-down-to-lock unit, which should prevent any inadvertent activation and subsequent battery drain. Aside from the weight imposed upon the unit, I have no complaints about the controls. They are well laid out, and the fit, finish, and feel leave nothing to be desired.
In testing, I found the reticle markings on the Vortex Razor HDII 4.5-27×56 to be correctly sized. The power ring caused no shift in POI, and the scope returned to zero without issue when adjusted. In the tracking testing, the scope gained .1 mil in 10 mils, measuring 10.1 mils on the target at 10 mils on the adjustments. At 15 mils, it had gained .2 mils. The image noticeably began to deteriorate at 14 mils from optical center, even though the scope adjusted out to 16.8. In windage testing, the adjustments appeared to gain .05 mils in the 4 mils of adjustment each way, although I question the veracity of this, as it is much more difficult to square a target to the tester horizontally than vertically and I am not convinced the target was totally square. I believe the scope had less deviation in windage than suggested and that the target was slightly canted. Also of note is that the elevation set screws loosened up once during testing. The user should be advised to pay special attention to the tension of those. The reticle measured a .95 degree cant, counter-clockwise.
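Those tracking figures translate into percentage error as follows. This is just the arithmetic behind the numbers above, not a Vortex specification:

```python
def tracking_error_pct(dialed_mils, observed_mils):
    """Percent deviation of observed reticle travel from the dialed value."""
    return (observed_mils - dialed_mils) / dialed_mils * 100

print(round(tracking_error_pct(10, 10.1), 2))  # -> 1.0
print(round(tracking_error_pct(15, 15.2), 2))  # -> 1.33
```

The 1% figure at 10 mils turns out to be in the same ballpark as the .83% Vortex later measured on this same scope with their collimator.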
When I finished my mechanical testing, I sent the results and a draft of the review to Vortex as I always do for scope testing. Usually, the response I receive from these review drafts is limited to the background section and has to do with either a request to share less insider baseball stuff or to tweak it in some other way that makes it sound like boring marketing jargon. I usually acquiesce to the first request and ignore the second since I figure there is a reason you have to pay money to print ads for people to read that stuff. It’s not very good. I do not usually get much interest in the actual testing. However, Scott was interested in having the scope back so he could test it. I thought this was interesting as he has a fancy collimator and he seemed genuinely alarmed at the relatively small deviations I found and suspicious of my testing. So, I sent the scope back with a request that he forward me his test results. His testing found a .83% deviation in tracking and substantial agreement on the reticle cant measurement. He also found no travel in the knob beyond the point at which the reticle movement stopped whereas I had noted .5 mils of travel. I began obsessing about this as I didn’t think Scott was trying to pull one over on me, but that left me with no suitable explanation for the discrepancy and I don’t like discrepancies. Scott mentioned some travel like this being a part of the design with the windage well off center, but I was quite certain that it was centered and could not be the issue. I pulled out my notebook in an effort to try to jog my now fuzzy memory and noticed that the very next note on my test page after the .5 mil travel note was “Elevation knob set screw did not come completely tight and loosened during testing, requiring a retest for return to zero”. 
Putting two and two together, it seems most plausible that the .5 mil of knob travel after reticle movement was probably it loosening up on me and in going from focusing on one thing to another with the return to zero testing, I failed to discover my mistaken attribution. So, two points for me on my testing and one for Scott on questioning my results. Maybe it’s not a competition, but competition makes people better so I’ll call it one and perhaps I will improve for next time. I think the takeaway for the user is that I believe Scott when he says that the deviations in this particular example were higher than normal, and even at that they were not outside of average in magnitude relative to other optics I have tested.
Summary and Conclusion:
Can a scope at $2,500 be a value proposition? I guess that is kind of a strange question, as all items at all prices must be something of a value proposition: by definition, a customer must think an item a value if he buys it. Regardless, in the course of my review, I found more cause to compare the HDII with brands more costly than itself than with less costly ones. I found the optical performance of the HDII excellent. The Razor HDII 4.5-27×56 will definitely be in the decision matrix of those looking at scopes in the $3.5k range, the argument being that $1k buys a lot of bullets, powder, and cases, and are you really giving up much in the way of optics to get all that?
The features of this scope seem particularly amenable to competition shooting. The reticle is fine, has a .1 mil precision rangefinding section, and has a Christmas tree section for when you are just out of time or not allowed to touch the adjustments. With these features, it plays to a lot of the situations a match director might want to concoct. Along these lines, its adjustments allow you to leave them unlocked for fast adjustment and no futzing. Scott was proud to inform me that this year the Razor HDII 4.5-27×56 was used by the significant majority of the top 15 ranked PRS shooters. Given the past tallies in dramatic favor of S&B over all comers, this indicates a dramatic market swing. The weight of the Razor HDII, while probably a non-starter for anyone who intends to hunt with the optic, is of little concern to most PRS-type competitors. That weight is the only real strike I see against the HDII as a general purpose long range optic, as its features and optical performance are excellent.
Here is Your Pro and Con Breakdown:
Pros:
- Excellent optics, competitive with more expensive brands
- Comfortable for the eye to be behind
- Well laid out controls with excellent feel
- A reticle I like very much
- Excellent warranty and reputation for service
Cons:
- At 48.5oz, it is the heaviest optic I have ever used
- Tracking on my sample was average, not excellent
Testing Methodology: Adjustments, Reticle Size, Reticle Cant
When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 or so mil of deviation is allowed from center in the erector, as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.
The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.
The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose, as the target was designed to be fired at rather than used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. Each click of movement in the scope adjustments moves the reticle on the target, and this can be observed by the tester as it actually happens during the test. It’s quite a lot of fun if you are a bit of a nerd like I am. After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note, every 5 mils dialed, of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on the same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant.
This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during this test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.
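That “little math” is just an arctangent of horizontal drift over vertical travel. A sketch of the calculation (the 0.25 mil over 15 mils figures are hypothetical illustrations, chosen because they happen to produce a cant near the .95 degrees measured on this sample):

```python
import math

def reticle_cant_deg(horizontal_drift_mils, vertical_travel_mils):
    """Cant angle implied by horizontal reticle drift during an elevation run."""
    return math.degrees(math.atan2(horizontal_drift_mils, vertical_travel_mils))

print(round(reticle_cant_deg(0.25, 15), 2))  # -> 0.95
```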
Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum deviation from spec that is acceptable to them, and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer the average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear.
This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test scopes purchased that have these desirable attributes as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it: squeaky wheel gets the oil and all that.
Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow-up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had reticle error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mils at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope: a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.
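The 1 degree / 21 cm example above checks out with a couple of lines of arithmetic, using the text’s own numbers (12.1 mils of drop at 1000 m):

```python
import math

def cant_error_cm(drop_mils, range_m, cant_deg):
    """Horizontal miss from dialing/holding drop on a canted reticle."""
    drop_m = drop_mils * range_m / 1000.0        # 1 mil subtends 1 m at 1000 m
    return drop_m * math.sin(math.radians(cant_deg)) * 100.0

print(round(cant_error_cm(12.1, 1000, 1.0)))  # -> 21
```

At the .05 degree average cant reported above, the same shot would be off by only about 1 cm, well inside the noise introduced by wind and shooter.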
Testing Methodology: Comparative Optical Evaluation
The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose, primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function, and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image-to-image variance in quality and the ridiculousness of attempting to determine the quality of the scope via images so obtained. Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects.
Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.
The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox, behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly, as everything is fringed with odd colors. Often these personality quirks mean more to the user’s experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for that.
The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks, of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all, as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have its diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics, with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optics’ performance and explains the reasons why.