Feb 23, 2019
 

Les (Jim) Fischer
BigJimFish
Written:  Jan 11, 2019

Athlon Midas TAC 6-24x50mm on Kelbly Atlas Tactical

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion
– Summary and Conclusion
– Testing Methodology:  Adjustments, Reticle Size, Reticle Cant
– Testing Methodology:  Comparative Optical Evaluation

Background:

            In reviewing both the Athlon Ares BTR 4.5-27x50mm and this Athlon Midas TAC 6-24x50mm this year, I have an unusual situation. Both scopes are from the same brand and, at $849 and $629 street, respectively, I would consider them to be in the same price bracket. This suggests that there will be a lot of overlap in the potential buyers of each scope and begs a great deal of direct comparison as well as an unavoidable degree of re-use of text when discussing things in common such as the background of the company, the near identical manuals, or the very similar adjustment design. My apologies for the overlap, such as the rest of this background section.

            Athlon is one of the newest players in the sport optics industry and it turned some heads a few years ago as it seemed to be born, fully formed, with a complete line of scopes at a wide variety of price points. This is because, in some respects, the apple doesn’t fall far from the tree, or runner, depending on your metaphor (I know, stretching it). Athlon was founded by (and is still run by) some Bushnell alums. As such, Athlon had the experience and contacts of a major market player at its start. Its business model also essentially differs little from that of its parent. They are both importers and brands – not manufacturers. As with most importers, they offer a broad selection of product lines and price points sourced from a variety of OEMs.

Where Athlon departs from many of the importers, or at least from its parent, is that it is smaller, more nimble, and flatter in terms of corporate organization. The principal players of Athlon are on the floor at tradeshows talking to customers, industry players, and grumpy writers. This is not really a large or small company thing so much as a philosophic thing. Huge companies, like Kahr or Benchmade for instance, can, and do, have founders on the floor. Much smaller and more bureaucratic companies often do not. This shows in the timeliness of the features Athlon puts in scopes:  they have their ear to the ground. My take is that the plan is basically to win on three things:  cost, service, and up-to-date feature sets. So far they appear to be mostly delivering on these points. Athlon scopes are generally less costly than other brands coming from the same OEM, they seem to be building a solid reputation for customer service, and their features are up to date with market trends.

Unboxing and Physical Description:

            Unboxing the Athlon Midas TAC 6-24x50mm reveals the exact same sparse accessories found with the big brother Ares scope:  no caps or covers, just a lens cloth, battery, manual, and invitation for the customer to review the product online. I think I’ll do that.

Athlon Midas TAC 6-24x50mm unboxing

The scope itself is on the smaller side for this magnification range. It features a mid-size 50mm objective and is a little longer than the Ares at 14.6″, but slightly lighter at 26.3oz. I am a fan of smaller objective, lighter weight optics. I have the general opinion that traditional objective sizes have never adjusted to the incredible light transmission gains that current generation lens coatings have made possible, and this has left many scope makers manufacturing huge, heavy scopes that gain little additional low light capability for all that added weight.

The Midas features a large uncapped 10 mil per turn zero stop elevation adjustment. The design and appearance of the adjustment is nearly identical to that of the Ares, though the Midas knob does not include the extra O-ring that the Ares has. The resulting feel is similar to the Ares with the ring removed, though the Midas is a bit stiffer. Specifically, it has a little higher ratio of click force to rotation force between clicks. This makes it a little harder to rotate just one click at a time without going over. I would not characterize it as excessively difficult in this regard, but you are going to occasionally over-rotate with it. I have a slight overall feel preference for the Ares elevation adjustment. I would characterize the Midas as having average elevation click stiffness, with the Ares on the squishier side. Both have a feel I would characterize as fine, but neither is excellent. These differences in adjustment feel between two scopes with exceptionally similar elevation knobs serve to highlight just how touchy a thing the feel of a scope’s clicks can be.

In a departure from the Ares, the Midas has a smaller capped windage knob. It is a 10 mil per turn knob marked 1-5 in each direction. It has good feel and is a nice compromise between a hunting design and a tactical design. That is to say, you could really use it either way and be pleased. The power ring and euro style diopter on the Midas are on the looser side of average, with a parallax knob I would classify as perfect.

Looking at the features of the Midas TAC elevation knob specifically, it is 10 mil per turn and features both a zero stop system that is a little different from what I have seen before and markings that can be repositioned. Repositioning the markings is done in a common way. They are located on an outer sleeve that pops off after removal of a screw. This sleeve is toothed with enough teeth that its markings will properly line up with the actual detents instead of landing between as some others have done. The zero stop system is one that the Midas TAC shares with the Ares BTR but that I have not seen on other optics – I am embarrassed to admit that I did not even notice that these scopes had one until halfway through the Ares review. As is common, the whole elevation knob on the Midas screws up and down as the adjustment is rotated. This attribute forms the basis of both the zero stop and the simple scribed turn indicator. The zero stop consists of a brass disc they refer to as the “zero stop locking plate” located under the removable outer adjustment sleeve. This disc can be repositioned using three set screws. So, basically, you zero the scope, remove the outer sleeve, loosen the set screws, and move the disc so that it is lying flat on the saddle with its stop protrusion immediately to the right of the stop protrusion on the scope saddle. You then gently tighten the set screws and replace the sleeve and its screw with the proper alignment of the zero. This zero stop is very inexpensive to make in addition to being quite functional. It also has the same advantage as most plunger style systems in that you can set it independently of the markings to give you a few tenths of adjustment below the zero if you want. It is a well designed system and I’m a fan.

The Mil-stop system used on the Athlon Ares BTR and Midas TAC scopes

The manual included with the Midas scope is the same mixed bag as the Ares and varies little in its text. It includes pretty good sections on focusing, setting eye relief, bore sighting, and zeroing (although it mistakenly refers to the “zero stop locking plate” as black when it is actually brass), and a lesser section on mounting. It also has nice dimensioned diagrams of the reticle. There is some lack of clarity in the manual regarding whether Athlon’s MOA based scopes are calibrated to true MOA (TMOA), which is 1.047″ @ 100 yds, or shooters’ MOA / inches per hundred yards (SMOA / IPHY), which is 1.0″ @ 100 yds. This is very important, as a 4.7% error is a lot of error when making long distance calculations (see the short sketch below). Upon speaking with the guys at Athlon, I found that their adjustments and reticles are calibrated in TMOA. The manual section on troubleshooting tips for accuracy is the most problematic section as it has some poor enough advice in it that I felt the need to write a whole paragraph about the manual. The section advises the shooter to “use a bench rest or sandbag to support the barrel and stock”. Force on the barrel deflects the barrel, causing shots to stray, and should be avoided – not encouraged – when seeking to shoot with accuracy. Support of the barrel with sandbags is actually often the cause of inaccuracy and not a solution for it. The manual also says to make sure there is “no excessive grease inside of the barrel”. This suggests to me that there might be a good reason to have a proper amount of grease in the barrel and a novice shooter might then, in error, apply grease to the bore. Though grease is sometimes used in a barrel for long term storage, there should never be any grease in a barrel when you are shooting. Grease in a barrel can not only cause inaccuracies, but can also cause dangerous and/or unbalanced pressures in a barrel. Grease does not protect a barrel from wear either, as wear is overwhelmingly a product of erosion in the throat of a barrel from powder burning there and not a product of friction with the bullet over the length of the barrel.
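
To put that 4.7% difference in concrete terms, here is a minimal sketch of the arithmetic. The 20 MOA correction and 600 yd range are hypothetical values chosen for illustration, not figures from Athlon or from this review.

```python
# Illustrative only: how far apart TMOA and SMOA drift as the dialed
# correction grows. The dial value and range below are made up.
TMOA_IN_PER_100YD = 1.047   # true MOA
SMOA_IN_PER_100YD = 1.000   # shooter's MOA / inches per hundred yards

def subtension_inches(moa, range_yd, in_per_100yd):
    """Linear subtension of a given MOA correction at a given range."""
    return moa * in_per_100yd * (range_yd / 100.0)

dialed_moa, range_yd = 20.0, 600.0
tmoa = subtension_inches(dialed_moa, range_yd, TMOA_IN_PER_100YD)
smoa = subtension_inches(dialed_moa, range_yd, SMOA_IN_PER_100YD)
print(f"{dialed_moa:.0f} MOA at {range_yd:.0f} yd: {tmoa:.1f} in (TMOA) vs "
      f"{smoa:.1f} in (SMOA), a difference of {tmoa - smoa:.1f} in")
```

That works out to a bit over 5 inches of disagreement on a 20 MOA correction at 600 yards, which is why it matters which flavor of MOA a scope and its reticle are actually built to.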

Reticle:

            The Athlon Midas TAC 6-24x50mm is available in two mil reticle options, the APRS2 and APRS3, as well as one MOA option, the APLR4. The two mil options are very similar to one another, with the APRS3 consisting of the APRS2 plus a Christmas tree section graduated in one mil increments vertically and .2 mil increments horizontally. The APRS2 is a typical mil hash reticle featuring a floating dot center and .2 mil increments horizontally out to 6 mils then .5 mil increments after that out to 9 mils, at which point there is just a thick crosshairs. Vertically, the reticle is graduated in .2 mil increments for just one mil. At that point, the top half is graduated in .5 mil increments out to 9 mils and then it becomes a thick crosshairs, while the bottom half is graduated in .5 mil increments out to 7 mils where it goes back to .2 mil increments until 10 mils, at which point it becomes a thick crosshairs. While there is probably some rationale for the alternating use of a .2 mil graduation system and a .5 mil one, that is not fully explained anywhere and I likely wouldn’t agree with it over the consistency of sticking with the .2 mil increments throughout, though it probably doesn’t matter a whole heck of a lot anyway. For what it’s worth, I think .2 mil graduations are a pretty good choice on a scope of this power range. Both vertical and horizontal crosshairs are numbered every 2 mils and are on the thinner than average side when it comes to line thickness. Generally, I think users will find both the APRS2 and APRS3 reticles good choices, with the user’s preference regarding a Christmas tree section being the deciding factor between them.

When tested, the reticle showed a very slight cant of ~.5 degrees counter-clockwise relative to the adjustments. This is not an amount of deviation I would be concerned about.

Horus CATS 280F test target through Athlon Midas TAC 6-24x50mm scope with APRS2 FFP MIL reticle.

Comparative Optical Evaluation:

            For optical comparisons to this Athlon Midas TAC 6-24x50mm, I had the other scopes in this series of sub $1K FFP mil/mil precision rifle scope reviews, the Athlon Ares BTR 4.5-27×50 FFP IR Mil and Sightron SIIISS624x50LRFFP/MH, as well as two that I have used as comparisons in previous reviews for context, the Leupold Mk 6 3-18×44 and my old (and now discontinued) Zeiss Conquest 4.5-14×44. All of these scopes were lined up together on a five slot adjustable v-block and evaluated using the procedure outlined in the methodology section at the end of this review. This same methodology is used on all long range scope evaluations and has been for several years now.

I have never before had a set of five scopes with such generally close optical performance. Usually, scopes somewhat sort themselves into performance tiers with higher tier scopes being better than lower tier scopes in pretty much all characteristics. That was not even remotely the case with this lineup. No scope was always first or last when evaluating particular performance parameters and the order of the scopes’ rankings changed with pretty much every particular parameter being evaluated. That being said, the Midas was, on balance, on the lower side of average for the group and was bested by its Athlon Ares stablemate in almost all respects. The best aspects of optical performance for the Midas were its larger than group average field of view and better than average contrast. Its weakest points were eyebox, chromatic aberration, and pincushion / barrel distortion. None of these performance aspects were what I would consider problematic, but they were areas where it lagged behind the comparison scopes and, most importantly, its sibling. The Midas scope performed closer to middle of the pack in resolution, stray light handling, and depth of field. Edge to edge clarity was excellent on all the scopes tested and no scope displayed any tunneling.

            It is worth noting here that the Midas is the least expensive scope in this lineup by a significant margin. In that respect, bully to the Midas for keeping up and even beating the average in a few aspects. That is not how I feel about it overall though. Being a 6-24x scope, the Midas is much simpler to do well than its 4.5-27x Ares sibling. At the same design and build quality, the Midas would look much better than the Ares, because 6x erector ratios are much, much harder to do well than 4x ones. That is not the case, however. The Ares is optically better in almost all respects. The Ares puts out truly excellent performance at its price, while the Midas provides merely adequate performance at its. It’s hard to feel really good about the Midas optical performance next to the Ares.

Doing the mechanical testing on the Athlon Midas TAC 6-24x50mm

Mechanical Testing and Turret Discussion:

            As mentioned in the unboxing section, the Athlon Midas TAC 6-24x50mm sports a very feature rich 10 mil per turn zero stop elevation knob where the zero stop and zero are set independently, allowing you to set whatever amount of turn below the zero before the stop that you desire. The windage knob is also 10 mils per turn, though with a lower profile and capped construction. It also lacks a stop and is marked out to 5 mil left and right instead of continuously. Testing the accuracy of these adjustments was done in accordance with the methodology section detailed at the end of this review. This methodology was followed on all the scopes this year and has been in use for a few years now.

In testing, the adjustments tracked perfectly in all respects, monotonously so. The scope adjusted up from optical center 14.3 mils with no deviation and then perfectly down 8.5 mils. This is not the full range of travel down, but rather the travel with the zero stop flush to the center post. There is a little room internally for the zero stop to protrude above the post with no problems. I show a maximum of 12.4 mils down on my example in this configuration, though I did not test the tracking out to that point. You could also remove the stop feature entirely and get even more travel. I show a max of 14.1 mils on my example. These numbers would suggest a 30 MOA base should not cause a problem, and that some users might be able to do a 40 MOA base and still have a 100 yd zero, though that will depend a lot on the rifle, since there is variance in all rifles between the centerline of the rail and that of the bore.
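
For anyone wanting to check that base math themselves, here is a rough sketch of the conversion I have in mind. It assumes the cant of the base comes straight out of down travel, uses my example’s 14.1 mil figure with the stop removed, and ignores the rifle-specific offset between rail and bore:

```python
# Rough sketch: down travel remaining after a canted base uses some of it up.
# Ignores the rail-to-bore offset, which varies from rifle to rifle.
MIL_PER_MOA = 0.2909  # 1 MOA is roughly 0.29 mil

def down_travel_remaining(available_down_mil, base_moa):
    return available_down_mil - base_moa * MIL_PER_MOA

for base_moa in (20, 30, 40):
    remaining = down_travel_remaining(14.1, base_moa)
    print(f"{base_moa} MOA base: roughly {remaining:.1f} mil of down travel left")
```

By that math a 40 MOA base eats up about 11.6 mil of the available down travel, leaving only a couple of mils of margin, which is why the 40 MOA case depends so much on the particular rifle.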

Tracking on both adjustments was repeatable and the scope returned to zero with no problems. The windage and elevation were also properly independent. No zero shift was caused by power change, parallax change, or diopter change.

You don’t get any better than zero deviation, so that is a big win. Getting adjustments to exactly match the correct magnitude is one of the most difficult aspects of scope manufacture. As such, most scopes show deviation to some degree measurable with my equipment. The average deviation for precision rifle scopes, based on my past tests, is about 1%.

Athlon Ares BTR 4.5-27x50mm on Mesa Precision Arms Crux rifle (front) with Midas TAC 6-24x50mm on Kelbly Atlas Tactical (rear)

Summary and Conclusion:

            The Athlon Midas TAC 6-24x50mm is a lot of scope with a lot of features for the $630 street that it goes for. The thing is that its sister scope, the Athlon Ares BTR 4.5-27x50mm, is even more scope at its $850 price. This is emotionally hard for me. I understand intellectually that the 35% more that the Ares costs is a very meaningful difference and that the Midas might itself be a budget stretch that represents a new world of possibilities since you are talking about a scope with real long range capabilities in a price range otherwise full of set and forget limited range scopes. The Midas will mean that previously inaccessible game at 400yds is very doable. That could be quality meat for months for a family. It is hard, as an optics geek (even a not so well-heeled optics geek), to connect with that though. It is much easier for me to be really impressed that Athlon managed to get better optical performance out of the Ares while also cramming in a 6x magnification ratio and landing it at the very low price of $850. Sure, a 6x vs 4x magnification ratio might not really translate into much more utility for you, the added illumination on the Ares adds little utility for almost anybody, and the Ares is only a little optically better, but aren’t you moved by how much more lit up some optics geek got about it?

Here is Your Pro and Con Breakdown:

Pros:
– Optics are significantly better than average at the price
– Tracked perfectly
– Very low price for a full featured FFP Mil/Mil zero stop scope
– Properly sized reticle with very little cant
– Very simple, effective zero stop that lets you choose travel below zero
– Lightweight, 26.3oz
– Smaller 50mm objective I prefer
– Full 10 mil/turn knobs
– Good adjustment range, 25mil
– Reticle design in line with current trends
– Good warranty

Cons:
– It’s hard not to recommend its sister scope, the Ares BTR 4.5-27×50, over it for better optics, more features, and a much larger 6x magnification ratio
– Basically no extras like scope caps, sunshade, or bra
– Athlon is a new company with a good, though very short, track record
– Manual has some advice that may lead a novice astray

Athlon Midas TAC 6-24x50mm on a Mesa Precision Arms Crux rifle

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant

            When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. Approximately .2 or so mil of deviation is allowed from center in the erector as it is difficult to do better than this because the adjustable V-block has some play in it. The erector can be centered with the scope mounted or not mounted. If it started unmounted, I mount it after centering. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the test rig.

Mechanical testing apparatus and target

            The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at an 8’x3′ Horus CATS 280F target 100 yds downrange as measured by a quality fiberglass tape measure. The target is also trued to vertical with a bubble level. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

            The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside-down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. (Since up for bullet impact means down for reticle movement on the target, the inversion is necessary.) With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. These bolts allow the scope to be precisely positioned such that its reticle is perfectly aligned with the test target prior to moving the adjustments. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test:  it’s quite a lot of fun if you are a bit of a nerd like I am! After properly setting the parallax to the target (head bob method) and diopter (after the parallax), I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of adjustment dialed of any deviation in the position of the reticle on the target relative to where it should be and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. At the extent of this travel I can also determine the cant of the reticle by measuring how far off of the target centerline the reticle has moved. I next reverse the adjustment process and go back down to zero. This is done several times to verify consistency with any notes taken of changes. After testing the elevation adjustments in this manner, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. The elevation and windage are then tested in conjunction with one another by making a large box 8 mil wide and as tall as the adjustments will allow. If the scope is one where it is easy to do so (not a pin type zero stop model), I next re-align the test rig to point the scope at the bottom of the target and test the elevation in the other direction for tracking and range. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target.
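
Regarding the notes taken every 5 mils of dialed adjustment, here is a minimal sketch of what they reduce to. The readings below are invented for illustration and are not data from any scope in this review:

```python
# Hypothetical dialed-vs-observed readings, noted at checkpoints as described
# above, reduced to a percent deviation at each point.
readings_mil = [(5.0, 5.00), (10.0, 9.95), (14.0, 13.90)]  # (dialed, observed)

for dialed, observed in readings_mil:
    deviation_pct = (observed - dialed) / dialed * 100.0
    print(f"dialed {dialed:5.2f} mil -> observed {observed:5.2f} mil "
          f"({deviation_pct:+.2f}%)")
```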

            Testing a single scope of a given model from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots took years to make it to market. Tolerances are a particular concern for scopes that have high magnification ratios and also for those that are short in length. Both of these design attributes tend to make assembly very touchy. This should make you, the buyer, particularly careful to test purchased scopes that have these desirable attributes, as manufacturers will face greater pressure on these types to allow looser standards. If you test your scope and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that. Remember that some deviations, say a scope’s adjustments being 1% too large or small, are easy to adjust for in ballistic software, whereas others, a large reticle cant for instance, are not.
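
On that last point, the correction itself is simple enough to sketch. Many ballistic programs expose something like this as a turret or tracking correction factor, though the name varies by app; the 0.99 figure below is hypothetical:

```python
# If a scope truly moves only 0.99 mil per 1.00 mil dialed (a hypothetical
# 1% undersized adjustment), scale the firing solution before dialing it.
def corrected_dial(solution_mil, actual_mil_per_dialed_mil):
    return solution_mil / actual_mil_per_dialed_mil

print(f"{corrected_dial(8.4, 0.99):.2f} mil")  # dial about 8.48 to get a true 8.4
```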

            Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center for longer. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow up shots. I expect this is caused by the error in objective lens position affecting both the same. In scopes that have had a reticle with error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary in magnitude as you move from erector center although adjustment deviation often does. The mean amount of reticle error is less than .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21 cm at 1000 meters with a 168 gr .308 load that drops 12.1 mil at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition. Lastly, the proliferation of “humbler” type testing units such as mine appears to have resulted in scope companies improving their QC standards. I see less deviation in products now than a few years ago.
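
For those who want to reproduce the cant example above, the relationship is just the drop multiplied by the sine of the cant angle; a minimal sketch using the same example load:

```python
import math

# Lateral error from reticle cant is roughly (drop) x sin(cant angle).
# Uses the example above: 12.1 mil of drop at 1000 m, 1 degree of cant.
def cant_error_cm(drop_mil, range_m, cant_deg):
    drop_m = drop_mil * (range_m / 1000.0)  # 1 mil subtends 1 m at 1000 m
    return drop_m * math.sin(math.radians(cant_deg)) * 100.0

print(f"{cant_error_cm(12.1, 1000.0, 1.0):.0f} cm")  # about 21 cm
```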

Testing Methodology:  Comparative Optical Evaluation

            The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose, primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire nor the resources to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end no agreement will be reached on the relative weights of different factors in analysis.

            The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the users’ experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and will be aware of the reasons for that impression.

            The central technique utilized for this testing is comparative observation. One of the test heads designed for my humbler apparatus consists of five V-blocks of which four are adjustable.  This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester, the adjustments centered optically, and the parallax set. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. Specific notes are made regarding:  resolution, color rendition, contrast, field of view, edge to edge quality, light transmission, pincushion and barrel distortion, chromatic aberration, tunneling, depth of field, eyebox, stray light handling, and optical flare. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

Comparative optical testing of this year’s sub $1k precision rifle scopes behind the adjustable v-block

Feb 23, 2019
 

Les (Jim) Fischer
BigJimFish
Written:  Nov 14, 2018

Athlon Ares BTR 4.5-27x50mm on Mesa Precision Arms Crux

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion
– Summary and Conclusion
– Testing Methodology:  Adjustments, Reticle Size, Reticle Cant
– Testing Methodology:  Comparative Optical Evaluation

Background:

            Athlon is one of the newest players in the sport optics industry and it turned some heads a few years ago as it seemed to be born, fully formed, with a complete line of scopes at a wide variety of price points. This is because, in some respects, the apple doesn’t fall far from the tree, or runner, depending on your metaphor (I know, stretching it). Athlon was founded by (and is still run by) some Bushnell alums. As such, Athlon had the experience and contacts of a major market player at its start. Its business model also essentially differs little from that of its parent. They are both importers and brands – not manufacturers. As with most importers, they offer a broad selection of product lines and price points sourced from a variety of OEMs.

Where Athlon departs from many of the importers, or at least from its parent, is that it is smaller, more nimble, and flatter in terms of corporate organization. The principal players of Athlon are on the floor at tradeshows talking to customers, industry players, and grumpy writers. This is not really a large or small company thing so much as a philosophic thing. Huge companies, like Kahr or Benchmade for instance, can, and do, have founders on the floor. Much smaller and more bureaucratic companies often do not. This shows in the timeliness of the features Athlon puts in scopes:  they have their ear to the ground. My take is that the plan is basically to win on three things:  cost, service, and up-to-date feature sets. So far they appear to be mostly delivering on these points. Athlon scopes are generally less costly than other brands coming from the same OEM, they seem to be building a solid reputation for customer service, and their features are up to date with market trends.

Unboxing and Physical Description:

            Unboxing the Athlon Ares BTR 4.5-27x50mm reveals it to be pretty sparse on the accessories:  no caps or covers, just a lens cloth, battery, manual, and invitation for the customer to review the product online. I think I’ll do that.

Athlon Ares BTR 4.5-27x50mm unboxing

The scope itself is on the smaller side for this magnification range. It features a mid-size 50mm objective, is 13.8″ long, and weighs 27.3oz. I am a fan of smaller objective, lighter weight optics. I have the general opinion that traditional objective sizes have never adjusted to the incredible light transmission gains that current generation lens coatings have made possible, and this has left many scope makers manufacturing huge, heavy scopes that gain little additional low light capability for all that added weight.

The adjustments on the Ares have a pretty good feel. The power ring and euro style diopter are on the looser side with the parallax and illumination on the stiffer side. As it comes, the elevation and windage knobs are on the mushier side of average with clicks that are tactile but not audible (as if any clicks would really be audible at a shooting range with hearing protection on). However, the adjustments have a user removable o-ring that is unnecessary for sealing the scope and can be removed to change the adjustment feel. Removing this results in an adjustment that takes less force to move but has clicks that are audible and feel more defined. Many prefer this feel. The elevation knob is 10 mil per turn and features both a zero stop system that is a little different from what I have seen before and markings that can be repositioned. Repositioning the markings is done in a common way. They are located on an outer sleeve that pops off after removal of a screw. This sleeve is toothed with enough teeth that its markings will properly line up with the actual detents instead of landing between as some others have done. The zero stop system is not one I have seen before and I am embarrassed to admit that I did not even notice that the scope had one until halfway through the review. As is common, the whole elevation knob on the Ares screws up and down as the adjustment is rotated. This attribute forms the basis of both the zero stop and the simple scribed turn indicator. The zero stop consists of a brass disc they refer to as the “zero stop locking plate” located under the removable outer adjustment sleeve. This disc can be repositioned using three set screws. So, basically, you zero the scope, remove the outer sleeve, loosen the set screws, and move the disc so that it is lying flat on the saddle with its stop protrusion immediately to the right of the stop protrusion on the scope saddle. You then gently tighten the set screws and replace the sleeve and its screw with the proper alignment of the zero. This zero stop is very inexpensive to make in addition to being quite functional. It also has the same advantage as most plunger style systems in that you can set it independently of the markings to give you a few tenths of adjustment below the zero if you want. It is a well designed system and I’m a fan.

The Mil-stop system used on the Athlon Ares BTR and Midas TAC scopes

The manual included with the Athlon scope is a mixed bag. It includes pretty good sections on focusing, setting eye relief, bore sighting, and zeroing (although it mistakenly refers to the “zero stop locking plate” as black when it is actually brass) and a lesser section on mounting. It also has some nice dimensioned diagrams of each reticle in the line. The section on troubleshooting tips for accuracy is problematic, however, as it has some poor enough advice in it that I am writing about it. The section advises the shooter to “use a bench rest or sandbag to support the barrel and stock”. Force on the barrel deflects the barrel, causing shots to stray and should be avoided – not encouraged – when seeking to shoot with accuracy. Support of the barrel with sandbags is actually often the cause of inaccuracy and not a solution for it. The manual also says to make sure there is “no excessive grease inside of the barrel”. This suggests to me that there might be a good reason to have a proper amount of grease in the barrel and a novice shooter might then, in error, apply grease to such. Though grease is sometimes used in a barrel for long term storage, there should never be any grease in a barrel when you are shooting. Grease in a barrel can not only cause inaccuracies, but can also cause dangerous and/or unbalanced pressures in a barrel. Grease does not protect a barrel from wear either, as wear is overwhelmingly a product of erosion in the throat of a barrel from powder burning there and not a product of friction with the bullet over the length of the barrel.

Reticle:

            Unsurprisingly, for a scope designed to be cost conscious, there is only one mil and one MOA reticle. They call both of these APLR3 reticles and they are similarly styled. The mil one used in this review has .2 mil graduations pretty much all around, including a cleverly done integration of a floating center crosshair into that scheme. Floating centers seem to be the current trend. The reticle is on the finer side of average, which I like, and has a substantial Christmas tree section with .2 mil graduations left and right every 1 mil of elevation that is a bit busier than I prefer. The biggest thing I am having trouble coming to terms with on the reticle is that the ticks between each mil point only up for the first two and only down for the second two, while the actual mil divisions extend both up and down. I get the logic but am not finding it quite as fast and intuitive in practice as I would like. All that said, it certainly has the features and advances of the most popular reticles today and I expect the reticle to be well-liked – at least as well-liked as anything can be when it comes to the personal taste that comes into play when talking about reticles. A tremendous amount of how much a reticle is liked by a shooter comes down to what that shooter is already comfortable with and therefore finds intuitive rather than any better or worse practices, and all of the things I am picking at with the APLR3 are in that former ‘taste’ category. Most importantly, when tested, the reticle showed no deviation in size from the correct dimensions and also showed no cant relative to the adjustments.

Horus CATS 280F test target through Athlon Ares BTR 4.5-27×50 FFP IR Mil

Comparative Optical Evaluation:

            For optical comparisons to this Athlon Ares BTR 4.5-27×50 FFP IR Mil, I had the other scopes in this series of sub $1K FFP mil/mil precision rifle scope reviews, the Athlon Midas TAC 6-24x50mm and Sightron SIIISS624x50LRFFP/MH, as well as two that I have used as comparisons in previous reviews for context, the Leupold Mk 6 3-18×44 and my old (and now discontinued) Zeiss Conquest 4.5-14×44. All of these scopes were lined up together on a five slot adjustable v-block and evaluated using the procedure outlined in the methodology section at the end of this review. This same methodology is used on all long range scope evaluations and has been for several years now.

I have never before had a set of five scopes with such generally close optical performance. Usually, scopes somewhat sort themselves into performance tiers with higher tier scopes being better than lower tier scopes in pretty much all characteristics. That was not even remotely the case with this lineup. No scope was always first or last when evaluating particular performance parameters and the order of the scopes’ rankings changed with pretty much every particular parameter being evaluated. That being said, the Ares was, on balance, on the better side of average for the group. Its best showings were resolution, low light performance, and eyebox, where it was second best in the lineup to two different scopes. Weak points were in distortion, where it had noticeable barrel distortion, and field of view, where it was probably the narrowest, but at least certainly on the narrow side, though this is difficult to be sure of as power ring markings are not actually calibrated. Other aspects where the scope scored mid pack, such as stray light handling, contrast, depth of field, and chromatic aberration, were better than I expected at the price and should not be an issue for any shooters. Edge to edge clarity was excellent on all the scopes tested and no scope displayed any tunneling.

            Overall I found the Ares performance quite satisfactory – much better than I expected at the price and far better than I saw a couple years ago in significantly more expensive and less feature rich competitors. The Ares, and in fact all of the sub $1K scopes in this lineup, land solidly in what I consider the mid-range performance tier that I formerly most associated with $1.5-2K price range optics. It should be noted here that the Athlon Ares BTR 4.5-27×50 has a massive 6x erector ratio. It shares this ratio with only the many times more costly Leupold Mark 6 in this group. The other scopes had 4x and 3.1x ratio erectors. That large erector ratio significantly complicates design and makes optical performance more difficult to obtain. It is also not something you expect to see in a sub $1K optic. This sort of erector range is usually associated with scopes $2K and up. To see it executed this well in a scope of this price is surprising. I will admit that, before starting the review, I had the false expectation that this scope would have a lot of optical difficulties and would finish near the bottom of the group instead of on the better half of average.

Mechanical Testing and Turret Discussion:

            As mentioned in the unboxing section, the Athlon Ares BTR 4.5-27×50 FFP sports a very feature rich 10 mil per turn zero stop elevation knob where the zero stop and zero are set independently, allowing you to set whatever amount of turn below the zero before the stop that you desire. The windage knob is also 10 mils per turn and very similarly constructed, but lacks a stop and is marked in L and R instead of continuously. Testing the accuracy of these adjustments was done in accordance with the methodology section detailed at the end of this review. This methodology was followed on all the scopes this year and has been in use for a few years now.

In testing, the adjustments deviated from the proper magnitude in the following ways and degrees:

– Adjusting impact up from optical center, the scope slowly lost a little for a little while: at 5.9 mils on target, it read 6.0 on the adjustments.
– From that point on, the offset was constant, adjusting to a maximum of 12 mils on target reading 12.1 mils on the knob.
– Adjusting down from optical center, the scope showed no deviation 8.5 mils down to my zero stop, with about 5 mils beyond that for a total elevation travel that is probably a little more than the stated spec of 22.2 mils.
– Similarly excellent, the windage tracked cleanly out to 4 mils each way.

Tracking on both adjustments was repeatable and the scope returned to zero with no problems. The windage and elevation were also properly independent. No zero shift was caused by power change, parallax change, or diopter change.

I was quite pleased with the almost negligible deviation in adjustment magnitude shown by this scope in testing. Getting adjustments to exactly match the correct magnitude is one of the most difficult aspects of scope manufacture. As such, most scopes show deviation to some degree measurable with my equipment. The average deviation for high-end scopes, based on my past tests, is about 1%. On average, this scope was better than that over its total range and at no point was off by more than one click. You really can’t ask for better than that.

Doing the mechanical testing on the Athlon Ares BTR 4.5-27×50 FFP IR Mil

Summary and Conclusion:

            What would you pay for a mil/mil ffp scope with 10 mil per turn turrets, illumination, a solid lifetime warranty, 6x erector ratio, zero stop, and good glass? I can tell you what you wouldn’t have paid a few years ago was the $850 street the Ares goes for. You would have paid at least twice that. I can’t tell you in the long run how this scope will weather. Athlon is basically a brand new brand so its track record, though good, is very short, but I can tell you that my first experience with their products has been quite good. Both of their scopes exceeded my expectations optically, mechanically, and with regards to feature set. I should also mention, since it is coming close to Christmas and it is unlikely that the Midas TAC review will be posted before then, that the performance was very close between the two models (the Ares optics are a little better but not much) and given that, and some similarities in manufacture, I expect both models come from the same Chinese OEM. So, your preview on that review is that those looking to choose between the two are debating price for features as the quality appears quite close.

In the larger picture, the takeaway I am having from these sub $1k ffp mil/mil reviews is that you can get a whole lot more now at this limited budget than you could before. In fact, I think these lower cost options are going to start to lure a lot of folks from higher price tiers who are willing to give up a little optical clarity, field of view, and sometimes power range, for a scope that is a lot lighter and a lot cheaper.

Here is Your Pro and Con Breakdown:

Pros:
– Optics are good, better than I expected at the price
– Tracked very well with no zero shifts
– Properly sized reticle with no cant
– Very simple, effective zero stop that lets you choose travel below zero
– Big 6x erector ratio usually only seen on much more expensive optics
– Lightweight, 27.3oz
– Smaller 50mm objective I prefer
– Full 10 mil/turn knobs
– Good adjustment range, 22.2mil
– Illumination
– Reticle design in line with current trends
– Good warranty

Cons:
– Field of view on the small side
– Basically no extras like scope caps, sunshade, or bra
– Athlon is a new company with a good, though very short, track record
– Manual has some advice that may lead a novice astray

Athlon Ares BTR 4.5-27x50mm in Bobro mount on a Kelbly’s Atlas Tactical rifle

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant

            When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. Approximately .2 or so mil of deviation is allowed from center in the erector as it is difficult to do better than this because the adjustable V-block has some play in it. The erector can be centered with the scope mounted or not mounted. If it started unmounted, I mount it after centering. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the test rig.

Mechanical testing apparatus and target

            The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at an 8’x3′ Horus CATS 280F target 100 yds downrange as measured by a quality fiberglass tape measure. The target is also trued to vertical with a bubble level. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

            The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside-down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. (Since up for bullet impact means down for reticle movement on the target, the inversion is necessary.) With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. These bolts allow the scope to be precisely positioned such that its reticle is perfectly aligned with the test target prior to moving the adjustments. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test:  it’s quite a lot of fun if you are a bit of a nerd like I am! After properly setting the parallax to the target (head bob method) and diopter (after the parallax), I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of adjustment dialed of any deviation in the position of the reticle on the target relative to where it should be and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. At the extent of this travel I can also determine the cant of the reticle by measuring how far off of the target centerline the reticle has moved. I next reverse the adjustment process and go back down to zero. This is done several times to verify consistency with any notes taken of changes. After testing the elevation adjustments in this manner, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. The elevation and windage are then tested in conjunction with one another by making a large box 8 mil wide and as tall as the adjustments will allow. If the scope is one where it is easy to do so (not a pin type zero stop model), I next re-align the test rig to point the scope at the bottom of the target and test the elevation in the other direction for tracking and range. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target.

            Testing a single scope of a given model from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots took years to make it to market. Tolerances are a particular concern for scopes that have high magnification ratios and also for those that are short in length. Both of these design attributes tend to make assembly very touchy. This should make you, the buyer, particularly careful to test purchased scopes that have these desirable attributes, as manufacturers will face greater pressure on these types to allow looser standards. If you test your scope and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that. Remember that some deviations, say a scope’s adjustments being 1% too large or small, are easy to adjust for in ballistic software, whereas others, a large reticle cant for instance, are not.

            Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center for longer. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow up shots. I expect this is caused by the error in objective lens position affecting both the same. In scopes that have had a reticle with error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary in magnitude as you move from erector center although adjustment deviation often does. The mean amount of reticle error is less than .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21 cm at 1000 meters with a 168 gr .308 load that drops 12.1 mil at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition. Lastly, the proliferation of “humbler” type testing units such as mine appears to have resulted in scope companies improving their QC standards. I see less deviation in products now than a few years ago.

Testing Methodology:  Comparative Optical Evaluation

            The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose, primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire nor the resources to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end no agreement will be reached on the relative weights of different factors in analysis.

            The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the users’ experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and will be aware of the reasons for that impression.

            The central technique utilized for this testing is comparative observation. One of the test heads designed for my humbler apparatus consists of five V-blocks of which four are adjustable.  This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester, the adjustments centered optically, and the parallax set. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. Specific notes are made regarding:  resolution, color rendition, contrast, field of view, edge to edge quality, light transmission, pincushion and barrel distortion, chromatic aberration, tunneling, depth of field, eyebox, stray light handling, and optical flare. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

Comparative optical testing of this year’s sub $1k precision rifle scopes behind the adjustable v-block

Feb 21, 2019
 

Les (Jim) Fischer
BigJimFish
Written: Sept 9, 2018

Sightron SIIISS624x50LRFFP/MH on a Mesa Precision Arms Crux with Bobro LabX rings

Table of Contents:

– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Upcoming Models and Changes
– Testing Methodology:  Adjustments, Reticle Size, Reticle Cant
– Testing Methodology:  Comparative Optical Evaluation

Background:

            Sightron is best known in the target shooting community for producing solid, no frills scopes at prices lower than comparable competitors. Sightron has always appeared to be low on advertising expenditures and behind the curve on features, but their quality, price, and customer service have been good.

A few years ago I spent a long time at the Sightron booth talking to one of their reps about the features necessary for precision rifle shooting, price points, where I saw gaps in the market, and why none of their current products did what I wanted them to do. It felt like a productive conversation. I found out some useful information about what they thought it would cost them to do this or that and I hope they found it useful as well. I remember specifically focusing on mid range FFP mil/mil stuff around $1k and low cost 2FP mil/mil stuff. They have since added the mid range FFP mil/mil scope in this review, a higher cost FFP mil/mil ED offering, and will soon be doing some lower cost FFP mil/mil stuff as well. Maybe I even had something to do with this. Either way, they now have some offerings that I find interesting.

Unboxing and Physical Description:

            The Sightron SIIISS624x50LRFFP/MH is a little surprising when it comes to the extras in the box. Instead of the near ubiquitous plastic scope caps, it includes a sunshade, scopecote, and lens cloth. It is both different than what I expected and also a bit more. As far as documentation goes, the SIII includes a manual that appears to be generic to all Sightron scopes and an insert specific to this model which has a dimensioned diagram of the reticle and some mil equations. I was able to give myself a nice big pat on the back for thoroughness by finding the mil equations in the insert and one of the tables in the booklet to be in error. I’ll bet it was a fun time in the office when that memo came rolling through. I suspect I was the only party involved who enjoyed himself. A party of one is still a party though, right? In any case, the equation in the insert is fixed now and hopefully the table in the manual is being updated as well.

The scope itself looks and feels very clean. It is a Japanese produced product and has the top shelf fit and finish you generally see in products coming out of Japan. The knobs are just 5 mils per turn instead of the more common 10 mils, but feel very good. They are neither squishy nor so stiff that it’s difficult to click just one increment. Similarly, the stiffness of the parallax and fast focus eyepiece is also quite pleasing. The styling is sort of hunter with just a slight nod to tactical in the knurled and uncapped adjustments. The optic is relatively long at 14.96″, narrow with a 50mm objective, and light at 23.8 oz. These factors, along with the 4x erector ratio and the simple objective lens group I have observed, indicate a pretty conservative optical design. This is not surprising given how economical this optic is relative to others with similar optical and production quality. It should also bode well for the optical performance later on in the testing. In keeping with the simple theme, the model is un-illuminated and has no zero stop or lock on the adjustments, though the zero can be reset.

The markings on the knobs must have been something of a debate within Sightron because, in addition to the included knobs, there are a few you can buy aftermarket with better markings. The included knobs are marked in a not-so-contrasty gold script every 5 clicks with, unaccountably, the number of clicks rather than any actual angular dimension. You can partially remedy this with a gold sharpie by adding a decimal point to make the clicks mils instead. The properly done knobs, what they call “tactical knobs,” use a higher visibility white script and are labeled in mils every .5 mils. The elevation on those knobs also has additional lines for other turns and the windage is marked to 2.5 mils R or L. I believe this scope will fit the 74007 knob that has markings up to 15 mils as well as the listed 74006 one which has markings only to 10 mils. With a 20 MOA base, this scope may actually have slightly more than 15 mils of travel depending on your rifle’s zero. Evidently, the argument about which knobs to include in the box was won by someone who has never shot at distances requiring significant drop compensation – perhaps it was our trigonometrically challenged manual writer. Either way, the properly marked knobs are $50 each so… probably sharpie.

Sightron SIIISS624x50LRFFP/MH Unboxing

Reticle:

            There are not a lot of options in general around Sightron’s SIII front focal plane line. Actually, there are exactly two, mil or MOA. Both the mil and MOA versions have a reticle matched to their adjustments and only that one reticle. In both cases these reticles are quite simple. The mil version has no labels, but features markings at 1 mil and .5 mil increments with .25 mil markings for the first mil and the target dot type floating center that seems to be the current trend for whatever reason. The reticle itself is on the fine side of the spectrum in terms of line thickness. I am, and have always been, in favor of fine reticles. I have found them to be more precise, a bit faster, and far more comfortable. Thick caterpillar reticles always give me the same feeling as a gnat flying around my head. I just want to swat them out of my view. Overall, there is really nothing special about the SIII’s mil reticle but also nothing particularly problematic. It is really a pretty good design for broad appeal. I don’t think anybody will hate it, but it is also not going to be anybody’s favorite and that is pretty much how I feel about it as well.

In testing, the reticle showed no deviation in size from the correct dimensions and also showed no cant relative to the adjustments. So, spot on with that.

Sightron SIIISS624x50LRFFP/MH reticle on an optical test pattern

Comparative Optical Evaluation:

            The Sightron SIII arrived earlier this year than any of the other test scopes and coincided perfectly with the first of the two review rifles for the year, the Kelbly Atlas Tactical and Mesa Precision Arms Crux. The timing was quite fortuitous as the SIII has a higher magnification than any of my personal scopes and also has a nice fine reticle. These two characteristics are of great importance for accuracy testing rifles. This gave me a great opportunity to have a lot of time behind the optic before any of the systematic optical and mechanical testing. I was quite pleased with the SIII’s performance during this rifle testing. In particular, it struck me as very good optically, resolving impacts with such alacrity that my estimations of group sizes while firing strings proved to be spot on.

For optical comparisons to this Sightron SIII, I had the other scopes in this series of sub $1k FFP mil/mil precision rifle scope reviews, the Athlon Midas TAC 6-24×50mm and Athlon Ares BTR 4.5-27×50 FFP IR Mil, as well as two that have been used as comparisons by me in previous reviews for context, the Leupold Mk 6 3-18×44 and my old (and now discontinued) Zeiss Conquest 4.5-14×44. All of these scopes were lined up together on a 5 slot adjustable v-block and evaluated using the procedure outlined in the methodology section at the end of this review. This same methodology is used on all long range scope evaluations and has been for several years now.

I have never before had a set of 5 scopes with such generally close optical performance. Usually, scopes somewhat sort themselves into performance tiers with higher tier scopes being better than lower tier scopes in pretty much all characteristics. That was not even remotely the case with this lineup. No scope was always first or last when evaluating particular performance parameters and the order of the scopes’ rankings changed with pretty much every particular parameter being evaluated. That being said, the Sightron SIII was, on balance, the best. It particularly excelled when it came to resolution, contrast, stray light handling, low light performance, and, unsurprisingly given its conservative design, depth of field. With regard to the eyebox, it was more middle of the pack, though it did not feel tight, cramped, or finicky, but rather seemed large enough. Similarly, it was close to average in chromatic aberration, though the field all performed well in this regard. The only parameter where the SIII’s performance was sub-average for the group was in field of view. I did not notice any particular favor given to this or that end of the spectrum by the SIII, in the way some scopes tend to favor greens or reds. Instead, the SIII seemed pretty balanced.

To some extent, I expected the SIII to have the best overall optical performance. It fits with the narrative of few features and high quality that I was expecting by reputation. To me, the SIII is what you get when you set out to see just how low cost you can make something of a high standard and still have what you need to shoot long range. You use a simple optical design and turrets, drop some features like illumination, and streamline the distribution by removing virtually all the options, variations, and most of the marketing budget. What you would expect would be excellent performance for the price and that is what the SIII delivers optically.

Mechanical Testing and Turret Discussion:

            As mentioned in the unboxing section, the SIII features simple 5 mil per turn adjustment knobs with no zero stop or turn indicator and a less than ideal marking scheme, but with great feel. The zero setting on these knobs is done with a single torx screw on top. Testing the accuracy of these adjustments was done in accordance with the methodology section detailed at the end of this review. This methodology was followed on all the scopes this year and has been in use for a few years now.

In testing, the adjustments deviated in the following ways and degrees:

Adjusting impact up from optical center, they were accurate to 4.0 mils.

At 5.0 mils on target the adjustments read 4.9 mils.

At 7.0 mils on target the adjustments read 6.8 mils.

At 10.0 mils on target the adjustments read 9.7 mils.

The scope adjusts up to 12.7 mils on the target, at which point the adjustments are at 12.4 mils. There was no continued movement of the adjustments after movement of the reticle stopped.

Adjusting down from center the scope was accurate to 2 mils.

At 3 mils on the target the adjustments read 2.9 mils.

At 7.0 mils on target the adjustments read 6.8 mils.

At 10.0 mils on target the adjustments read 9.7 mils.

At 14.0 mils on target the adjustments read 13.6 mils.

The scope adjusts down to 15.0 mils on the target, at which point the adjustments are at 14.7 mils. There was no continued movement of the adjustments after movement of the reticle stopped.

The windage varied similarly to elevation measuring 4.0 mils on the target at 3.9 mils on the adjustments each way.

This tracking was repeatable and it returned to zero with no problems. Tracking on windage and elevation was properly independent. No zero shift occurred with power change, parallax change, or diopter change. For those wondering, it is not unusual to have more adjustment on one side of optical center than on the other. Though the tube will have the same amount of room on both sides of center, other factors, such as the return spring or turret housing, often limit travel in one or the other direction.

Getting adjustments to exactly match the correct magnitude is one of the most difficult aspects of scope manufacture. As such, most scopes show deviation to some degree measurable with my equipment. The average deviation, based on my past tests, is about 1% at 10 mils. The SIII was a good bit above this, deviating 3% at 10 mils. The effect of this on the shooter is that you need to correct your estimated ballistic table for it. If you print tables from online calculators, such as I do, you can tailor each entry to reflect the scope’s exact deviation at that point. In the case of smartphone type ballistic computing applications, some now include an input to allow for deviations, usually linear, arising from the scope. Most of the time scopes do not deviate entirely linearly, however. Scopes usually deviate by more the further from center the adjustment moves, as is the case with this one. In the end, most app-based ballistic calculators struggle both with this correction and with integrating actual data proven in the field with the calculator’s estimations. At some point I should probably do a whole review set on ballistic calculators, but that is not today’s project.
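
As a rough illustration of what that correction looks like in practice, here is a minimal Python sketch. It derives the deviation from one of the checkpoints measured above (10.0 mils on target at 9.7 mils indicated) and scales a computed firing solution accordingly; since most scopes deviate non-linearly, a real table would want a factor per checkpoint rather than the single one used here, and the 8.4 mil solution is just a hypothetical example.

```python
def knob_correction_factor(actual_on_target: float, indicated_on_knob: float) -> float:
    """How much to scale a computed drop so the dialed value yields the true correction."""
    return indicated_on_knob / actual_on_target

def corrected_dial(true_drop_mils: float, factor: float) -> float:
    """Apply the scope's measured correction factor to a ballistic solution."""
    return round(true_drop_mils * factor, 2)

# Measured checkpoint from the tracking test above: 10.0 mils on target read 9.7 on the knob,
# i.e. the clicks run about 3% large, so you dial slightly less than the computed drop.
factor = knob_correction_factor(10.0, 9.7)   # 0.97
print(corrected_dial(8.4, factor))           # hypothetical 8.4 mil solution -> dial about 8.15
```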

The takeaway from the mechanical testing is that the SIII tracks cleanly and repeatably, but has a bit more deviation than I would expect. This will mean more correction to ballistic tables than is typical. It should also be mentioned that other SIII examples should not be expected to deviate by the same amount in their adjustments; all scopes are like this. Scopes are typically designed to have no deviation, but slight lens positioning differences from piece to piece result in deviation from the design specs.

Sightron SIIISS624x50LRFFP/MH during mechanical testing

Summary and Conclusion:

            Quite frankly, this year’s set of sub $1k FFP mil/mil precision optics has proven much better than I expected. A few years ago you really couldn’t get the features necessary to shoot long range in an optic for under $1.2k and I really didn’t like any of the options until almost $2k. I like this Sightron though and I like some of the other sub $1k options as well. Shooters now have some real, viable, and meaningful choices in long range precision optics on a budget. As for that budget, the street price on this Sightron varies significantly from outlet to outlet, but the shooter should find it for well below $1k.

I think the SIIISS624x50LRFFP/MH represents a low featured but well manufactured precision rifle scope. It is the best optically that I have tested and is significantly better than I would expect from a scope at that price. Similarly, I expect it to be pretty durable as the production line it comes from has a long history of durability and longevity. Sightron also has a good reputation for backing their products should anything go amiss. The SIII’s features are limited, though, and the adjustments are neither 10 mils per turn nor very well marked unless you spend the $50 for the aftermarket turrets. It is also not illuminated. Lastly, the 3% deviation in adjustments at 10 mils from the example I have would suggest that one element of keeping costs low is the allowance of wider tolerances in manufacture than average, specifically in the manual labor of lens positioning that accounts for a large amount of manufacturing cost. This does mean more correction of data by the end user.

All told, I am certain that the package Sightron has put together here will be found compelling by many and develop a significant following. I know I am quite taken with it.

Here is Your Pro and Con Breakdown:

Pros:
– Exceptional optics for the cost in almost every aspect
– Lightweight
– Nice extras with sunshade and scopecote
– Good adjustment range
– All the knobs feel good (sorry, pun not avoided)
– Reticle is acceptable and sized properly
– Tracks repeatably with no zero shifts
– Good warranty and reputation

Cons:
– Stripped features, 5mil/turn, no illumination, no turn indicator, low erector ratio
– Poorly labeled adjustments unless you buy aftermarket
– The tested example had more deviation than average on adjustments
– Manual is lacking and contains basic errors

Sightron SIIISS624x50LRFFP/MH in Bobro dual lever mount on Kelbly Atlas rifle

Upcoming Models and Changes:

            In the process of doing this review and communicating back and forth with Sightron, I learned of some upcoming features and models that I think will be of interest to readers of this review. As I mentioned before, not long ago Sightron didn’t make anything that was featured properly for long range shooting and this model represented something of a first step into that feature set. Evidently, it went well as Sightron is coming up with some more offerings in that direction as well as improving this and other existing models.

Specifically, Sightron is giving the SIII and SV zero stops and coming out with an even lower cost ($699 MSRP) FFP mil/mil offering in the S-Tac line which will be a first in that price range to offer a zero stop. The zero stop on the SIII and S-Tac is of an entirely new design that is remarkably simple, inexpensive to manufacture, and also very flexible and easy to use. Simply put, it is a threaded collar that you just snug up under the elevation knob at the zero point, preventing it from going lower in the way that shim systems worked but without the cumbersome mess. Unlike pin type systems, this can also be set a couple tenths below zero if you want a little wiggle room. In that way, it is better than many more complex zero stop systems on scopes costing many thousands of dollars. It is also very easy to understand, so I don’t think anybody will get confused setting it, as has often happened with other systems. In addition to the zero stops, Sightron will be adding more, and more refined, reticle choices to some of its models.

I expect to have additional details on all of this in this year’s SHOT Show reporting. For now, here is a look at Sightron’s simple and effective zero stop concept.

Sightron’s new zero stop concept on an upcoming S-Tac mil/mil FFP scope

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant

            When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. Approximately .2 mil of deviation is allowed from center in the erector, as it is difficult to do better than this because the adjustable V-block has some play in it. The erector can be centered with the scope mounted or not mounted. If it started unmounted, I mount it after centering. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the test rig.

Mechanical testing apparatus and target

            The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at an 8’x3′ Horus CATS 280F target 100 yds downrange as measured by a quality fiberglass tape measure. The target is also trued to vertical with a bubble level. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

            The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside-down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. (Since up for bullet impact means down for reticle movement on the target, the inversion is necessary.) With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. These bolts allow the scope to be precisely positioned such that its reticle is perfectly aligned with the test target prior to moving the adjustments. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test:  it’s quite a lot of fun if you are a bit of a nerd like I am! After properly setting the parallax to the target (head bob method) and diopter (after the parallax), I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of adjustment dialed of any deviation in the position of the reticle on the target relative to where it should be and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. At the extent of this travel I can also determine the cant of the reticle by measuring how far off of the target centerline the reticle has moved. I next reverse the adjustment process and go back down to zero. This is done several times to verify consistency with any notes taken of changes. After testing the elevation adjustments in this manner, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. The elevation and windage are then tested in conjunction with one another by making a large box 8 mils wide and as tall as the adjustments will allow. If the scope is one where it is easy to do so (not a pin type zero stop model), I next re-align the test rig to point the scope at the bottom of the target and test the elevation in the other direction for tracking and range. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target.

            Testing a single scope of a given model from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots took years to make it to market. Tolerances are a particular concern for scopes that have high magnification ratios and also for those that are short in length. Both of these design attributes tend to make assembly very touchy. This should make you, the buyer, particularly careful to test purchased scopes that have these desirable attributes, as manufacturers will face greater pressure on these types to allow looser standards. If you test your scope and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that. Remember that some deviations, say a scope’s adjustments being 1% too large or small, are easy to adjust for in ballistic software, whereas others, a large reticle cant for instance, are not.

            Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center for longer. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow up shots. I expect this is caused by the error in objective lens position affecting both the same. In scopes that have had a reticle with error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary in magnitude as you move from erector center, although adjustment deviation often does. The mean amount of reticle error is less than .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21 cm at 1000 meters with a 168 gr .308 load that drops 12.1 mil at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition. Lastly, the proliferation of “humbler” type testing units such as mine appears to have resulted in scope companies improving their QC standards. I see less deviation in products now than a few years ago.

Testing Methodology:  Comparative Optical Evaluation

            The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose, primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire nor the resources to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end no agreement will be reached on the relative weights of different factors in analysis.

            The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the users’ experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and will be aware of the reasons for that impression.

            The central technique utilized for this testing is comparative observation. One of the test heads designed for my humbler apparatus consists of five V-blocks of which four are adjustable.  This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester, the adjustments centered optically, and the parallax set. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. Specific notes are made regarding:  resolution, color rendition, contrast, field of view, edge to edge quality, light transmission, pincushion and barrel distortion, chromatic aberration, tunneling, depth of field, eyebox, stray light handling, and optical flare. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

Comparative optical testing of this year’s sub $1k precision rifle scopes behind the adjustable v-block

Jul 18, 2016
 

Review of the U.S. Optics LR-17 3.2-17×44 Illuminated Optic


Les (Jim) Fischer
BigJimFish

July 18, 2016

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Testing Methodology:  Adjustments, reticle size, reticle cant
– Testing Methodology:  Comparative optical evaluation

 

U.S. Optics LR-17 3.2-17x44mm in Bobro dual lever mount atop Remington 5R


 

Background:

Over the past number of years I have done quite a few reviews of U.S. Optics products. During most of those years, my primary long range scope was one or another U.S. Optics SN-3 3.2-17x44mm scope. This model has since been renamed the LR-17 in a much needed bid to make the USO product line, which had a number of very different designs under the SN-3 designation, a bit less confusing. Also during that time, U.S. Optics modernized its production methods in order to gain ISO 9001 certification, changed from a totally custom maker to one with some standard models, began to offer its products via some retailers, and was purchased internally from the founder’s son by some of its employees. Probably the most important of these was the ISO 9001 certification because of what those changes brought to U.S. Optics. The previous organization of production, focused completely on one-off customs, was not very efficient. This inefficiency led to higher costs and more QC problems than necessary. Since the change, the greater efficiency has not only improved QC but allowed USO to actually lower prices on a number of models. I probably don’t need to tell you that nobody else has lowered prices on existing models. Do you remember what an S&B PMII 5-25x went for 5+ years ago? I do, and it wasn’t the $3.74k it goes for now. I actually had to add another $500 to this just from the time I started this review to when I finished it. The scope has now basically doubled in price over the years. We in the firearms industry have grown accustomed, in recent times, to increasing prices on existing products, though S&B is really in a class of its own in magnitude. This general price increase is a byproduct of inflation, currency fluctuations, and, most importantly, soaring demand from a series of panic buy events. It is decidedly not the norm for products produced in a capitalist economy to behave this way. The norm is the ever greater efficiency and cheaper prices you see on, say, flat screen TVs. This year I have seen the reality of this begin to come home for companies in the firearms industry as product stock is soaring and some, seeing the writing on the wall, have slashed prices. Perhaps USO was ahead of the curve in understanding this, or perhaps it is all internal numbers and has little to do with macroeconomics. In either case, USO has lowered prices and quite a few others will have to do so as well.

I often get asked by people what is new and better in optics, and this review somewhat addresses that new is not always better. It has been my experience that many new designs, which rely much more heavily on computer simulations than older designs, could have used some more hands-on prototype testing. There are a lot of compromises in optical design that are difficult to quantify and, more and more, I seem to be encountering designs that are difficult to use due to some of the design choices. Of particular concern are problems with having the whole image focus substantially in the same location so that your eye does not have to move around behind the optic to get different parts of the image in focus. I did not see this issue much in the past, but it has become prevalent, particularly in physically short and high magnification multiplier designs. This review looks at a very old optical platform that is a less aggressive design in its physical dimensions than many new competitors but is also a more thoroughly tested and, often, better optical design.

 

Unboxing and Physical Description:

For years, USO has been famous for its plain crappy white box with U.S. Optics tape. It has even become something of a cult symbol for its total divergence from the industry trend and complete lack of marketing. It reminds me somewhat of the boxes that Nikkor lenses come in, which have remained unchanged since at least the 1980s:  black and gold and stylistically obsolete. U.S. Optics has since updated this design to include a snazzy slipcover and more aesthetic end sticker, but has, I think wisely, elected to retain the core, original, classic, taped white box. The example I am reviewing today was one of the first to bear the new LR-17 designation and, by a printer’s delay, predated this new slipcover as well as new manuals, which are a glossy, bound affair in contrast to the previous corner-stapled printed loose sheets.

Inside the box whose plainness I am far too enamored with, you will find what I consider the usual adornments of a scope. There are factory marked caps, a manual, and the wrenches necessary for adjustment. In the case of a USO with an EREK knob, you will also get the cap with a hole in it for EREK adjustment.

 

U.S. Optics LR-17 3.2-17x44mm with box and accessories. New manuals and box sleeves were not yet ready at the time I obtained this review sample.


The appearance of the LR-17 itself is unique. The T-Pal (turret parallax) feature makes for a long saddle section of the scope that, at 2.89″, does not accommodate many of the existing one piece mounts. There is no integration of features in this design so elevation, windage, illumination, and parallax are all separate knobs. The usual configuration is with illumination and windage one in front of the other on the right side, but configurations actually exist with left hand windage. The EREK knob itself is very low and very wide. This is a well loved feature of the design and the wide nature makes it easier to read and gives better feel while it remains low and unobtrusive. A joint will be noticed in the objective bell. It is unusual for a scope of this cost to have a multi-piece main tube, but USO does due to material length limitations of the lathes used. At 2.1 lbs and 16.5″, the LR-17 is about average for weight and a bit longer than most competing scopes. The 3.2-17x range comes out to a 5.3x erector ratio. This is still a little above average, but was unheard of when the design first came out.

 

Reticle:

The production LR-17 comes in seven reticles. Two of these are in IPHY. They are the PCMOA and MDMOA reticles. Five of the designs are mil. They are the Gen II XR, MPR, H-102, H-59, and, most popular, GAP design.  These designs represent only a piece of what was once the whole custom catalog, beyond which USO used to actually work with users to create new reticles (this was obviously not free and had substantial minimum orders, so don’t go bugging them about it). The result of this is that some old esoteric reticle designs such as “Jon Beanland” are floating around and some new designs, the Big Dog Steel reticle comes to mind, have been proposed. I mention all of this reticle strangeness because the existing mil reticle options are not what I would like to see. They really whittle down to basic or Horus in nature and it is my hope that at some point the offerings might be improved.

GAP reticle as used in many U.S. Optics models. No exotic deer were harmed for this magnificent photo.


 

Comparative Optical Evaluation:

The USO 3.2-17x design, in one example or another, has been more tested than any other optical design by me. I have used it, with my Zeiss Conquest 4.5-14x, as reference scopes in virtually all of my reviews. This is probably much to the annoyance of many a scope manufacturer as both of these are very solid optical designs in terms either of cost per performance or absolute performance and both are also very old designs.

In my latest set of reviews, I sat a brand new LR-17 side by side with a Vortex Razor HDII 4.5-27×56, Nightforce SHV, Burris XTR II 4-20×50, Leupold MK6 3-18×44, and my trusty Zeiss Conquest 4.5-14×44. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.

 


The comparison lineup from left to right- Vortex Razor HDII 4.5-27×56, Nightforce SHV 4-14×56, Burris XTR II 4-20x50mm, USO LR-17 3.2-17×44, Leupold MK6 3-18×44, Zeiss Conquest 4.5-14×44* not pictured*

 

The LR-17 and Razor HDII were pretty clearly in a league of their own. In many ways, parsing the optical performance of the Vortex Razor HDII 4.5-27×56 vs. the USO LR-17 is splitting hairs. Both were quite exceptional and I doubt very much anyone will be unsatisfied with the optical performance of either. Some of what we are here to do, though, is split hairs, and since we can probably see those hairs through either of these two scopes, we had best commence – keeping in mind the difficulty of this, as the slightest changes in lighting as cloud thickness changed (or whatnot) were enough to constantly make me change and reverse opinions about who had better resolution (USO), contrast (USO), or color rendition (Vortex). A more certain judgment is that the eyebox on the Vortex was more forgiving of head position than the USO and that its edges were better. Also certain is that the Vortex suffered more image loss as adjustments were moved near max adjustment range and farther from optical center, though given the much greater range of the Vortex in adjustment vs. the USO, it would be unfair to fault it on this. It should be noted that this USO has the largest field of view of any high power scope I have tested, an especially impressive statistic given its exceptional edge-to-edge clarity.

In general, given the many hours of shooting and testing I have had behind LR-17 designs, I can say with confidence that they are very well balanced and comfortable optical platforms that do not lag in optics relative to the much newer optical designs with which they now compete. It was good fortune that the most recent scope I tested the LR-17 against was the Vortex Razor HDII 4.5-27x, as this is probably the hottest new scope on the market today. The LR-17 is right on par with the HDII in optical performance, though the HDII does have a more aggressive 6x erector ratio.

 

Mechanical Testing and Turret Discussion:

Here is where we talk about the EREK knob. This was one of the first knobs that could be used in a zero stop fashion. I say could be because the concept of a zero stop was not really a thing when it was designed. It just ended up being able to be used that way when people had a mind to, or perhaps people got a mind to because it could be. It is really kind of hard to pin that down. The original intent of the design was to have a low elevation knob and yet still allow full vertical travel of the erector within the main tube. Because of this origin, the EREK, when used as a zero stop, is actually a little tricky to set up. Let’s talk about the parts of the knob. There is a sleeve with graduations that can easily be removed and which is held in place with either a cap with a hole or a solid cap, a knob that clicks when moved, and a plunger in the middle that can be adjusted with a hex wrench and does not click when moved on its own. You probably won’t have any problem figuring out the sleeve part. You can set it wherever you want with no effect on the point of aim. The other two parts are trickier. You would think that you could zero the scope, put the hex wrench in the center hole, and hold it stationary while turning the knob down to stop. This is not the case. Moving the outer knob while the plunger is stationary does move the impact point. That is the trick:  both the plunger and the knob independently move the point of aim. To easily adjust the EREK for use as a zero stop, you therefore need another tool:  a magnetic bore sight. What you do is to zero the scope on target as you normally would. You then attach the bore sight to the barrel and make note of where on the grid of the bore sight your point of aim is. You can then bring the knob down to zero and use the plunger to return to your correct point of aim on the grid of the bore sight. It is a step and a tool more than most current zero stop designs require, but it does work and, like most plunger based zero stop designs, it also allows you a choice of how far below zero the stop is set. This is something many designs do not allow to be changed. I hope you find this explanation helpful, as setting the EREK knob as a zero stop has frustrated many shooters who did not understand that the plunger and knob both independently move point of aim. With the correct understanding and tools, the adjustment can be done with only minor inconvenience vs. newer designs.

The EREK knob itself has a very USO feel to the adjustment. That is to say that the clicks feel very positive but also very smooth. Moving up or down does have a different feel and sound, but both are pleasing to my ears. I am a fan of this feel, as some other designs are so stiff that it is hard not to over-adjust and they always feel like the thing’s going to break, while other designs are kind of sloppy with play within a click. The USO has positive clicks, but they are not very stiff and are quite smooth. Because of the large diameter nature of the knob, the clicks are also well spaced and easy to read. The knob on newer EREKs is 11 mils per turn with no tactical turn indicator. The previous knob was 9 mils. I am not sure why USO chose 11 mils as it makes 2nd turn use tricky. Though the 20.5 mil total travel in the LR-17 is less than most new scopes, it is still enough that, with an angled base, 2nd turn use is clearly possible. Obviously, the thought is that the 11 mils will be all that is utilized. Perhaps that is fine, as few shooters will ever use more than 11 mils and those shooters would presumably be interested enough in high travel to choose a design that excels at that.
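
As a small illustration of why an 11 mil turn is awkward on the second turn, here is a Python sketch (the 14.3 mil correction is a hypothetical value) that splits a required correction into whole turns plus the remainder shown on the dial.

```python
def dial_position(required_mils: float, mils_per_turn: float = 11.0):
    """Split a required elevation correction into whole turns and the dial remainder."""
    turns, remainder = divmod(required_mils, mils_per_turn)
    return int(turns), round(remainder, 1)

# Hypothetical 14.3 mil correction: one turn plus 3.3 mils on an 11 mil/turn knob,
# versus the easier-to-track one turn plus 4.3 mils on a 10 mil/turn knob.
print(dial_position(14.3))        # (1, 3.3)
print(dial_position(14.3, 10.0))  # (1, 4.3)
```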

Usually, with my adjustment testing, I am not able to supply any sort of sample size as I only have one scope on hand. With the LR-17, however, I have been able to test two, as well as an additional two USO 5-25x designs that may also offer insight.

The adjustments on the newest LR-17 I had on hand were .1 mil small at 10 mils, reading 10 mils at 9.9 actually traveled, and .2 mils small at the full 14 mils traveled from optical center to stop (this is obviously more than spec for travel, by the way). The reticle was also 1% small so, to the shooter, there would be no disagreement between the reticle and adjustments out to beyond 10 mils. No deviation in windage was noticeable out to the 4 mils that I can measure, but, given the difficulty of getting the target squared horizontally with the shooter, there is not much to say about that. No shift in point of aim with power change was recorded and the reticle was canted less than .05% counter-clockwise.

In addition to that late 2013 scope, I tested a 2006 5-25x, a 2010 5-25x, and a 2011 3.2-17x. Their respective elevations registered:  .2 mil large at 7 mils (full range), perfect at 10 mils, and perfect at 10 mils. The first two had correctly sized reticles and the third was small by .05%. None of these scopes had any problems with point of aim changing with power change. The 2006 5-25x notably also would not focus down to the 100yd spec, but would instead only go to maybe 130yd. That is more annoying than you would think.

This sample size gives us some insight into the range of accuracy in USO scopes. Only the oldest had what I would consider unacceptable deviation of 2% in adjustment magnitude. The middle two were pretty spot on and the new one deviated in both reticle size and adjustment magnitude by 1%. Those are errors that, due to their consistency with each other, would be unlikely to be noticed by a shooter and, I expect, were probably caused by the same lens positioning error.

 

U.S. Optics LR-17 EREK elevation knob with outer sleeve removed.


 

Summary and Conclusion:

The U.S. Optics 3.2-17x optical platform is now well over 10 years old, but as we can see, gives up nothing to new designs in optical performance. In fact, I would say it is still better than par in that regard, being very comfortable to be behind with exceptionally good clarity and field of view. It remains one of my overall favorite optical designs. In terms of features, this design was one of the first to offer what are currently considered the basics of a long range tactical scope with a zero stop feature, high revolution elevation knob, and matching accurate knobs with reticles. The execution of the elevation knob is starting to show its age as newer models are less confusing to the user, quicker and easier to set, and often offer additional features such as a pop-up turn indicator or lock. I would not complain if USO saw fit to update the design of the EREK knob.

The LR-17 should serve to remind us of a couple truths. Introducing new models is not the only way to improve your product. Improving manufacturing to allow for better QC and lower cost with an existing strong product is also a good way to improve your offerings. Newer is also not always better as anybody can tell you when it comes to the shooting sports in general. The LR-17 remains substantially better than most much newer competing designs and remains one of my favorite long range optics.

Here is Your Pro and Con Breakdown:

Pros:
Excellent optics
Comfortable for the eye to be behind
Particularly good field of view
Good feel to the adjustments
Excellent warranty and reputation for service

 
Cons:
EREK knob is less feature-laden and more difficult to adjust than many competitive offerings
Reticle designs are very average
Tracking on my sample was average, not excellent
Large footprint

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 mil of deviation is allowed from center in the erector, as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27×56

 

The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18×44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test. It’s quite a lot of fun if you are a bit of a nerd like I am. After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of adjustment dialed of any deviation in the position of the reticle on the target relative to where it should be and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency with any notes taken of changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during this test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.
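
For what that “little math” amounts to, here is a minimal Python sketch under the assumptions of the test just described: the cant angle is the arctangent of the reticle’s horizontal drift divided by the vertical travel over which it accumulated. The drift and travel numbers below are hypothetical.

```python
import math

def reticle_cant_deg(horizontal_drift_mils: float, vertical_travel_mils: float) -> float:
    """Reticle cant (degrees) from horizontal drift observed over a span of dialed elevation."""
    return math.degrees(math.atan2(horizontal_drift_mils, vertical_travel_mils))

# Hypothetical observation: 0.02 mil of drift over 15 mils of dialed elevation.
print(round(reticle_cant_deg(0.02, 15.0), 3))  # about 0.076 degrees
```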

Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market as they would cost more than it was believed the market would bear. This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test purchased scopes that have these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with the adjustments, so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mils at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.
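
For readers who would like to verify the 1 degree example above, here is a minimal sketch in Python; it assumes the cant-induced error is simply the dialed drop multiplied by the sine of the cant angle, with 1 mil taken as 10 cm per 100 meters.

import math

range_m = 1000
drop_mils = 12.1     # the 168gr .308 drop at 1000 meters quoted above
cant_deg = 1.0

drop_cm = drop_mils * range_m / 10                  # 1 mil subtends 10 cm per 100 m, so 100 cm per mil here
error_cm = drop_cm * math.sin(math.radians(cant_deg))
print(round(error_cm, 1))                           # ~21.1 cm, in line with the figure quoted above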

 

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the user’s experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for that.

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44

 

May 21 2016
 

Review of the Minox ZP8 1-8x24mm Illuminated Optic

BigJimFish logo

Les (Jim) Fischer
BigJimFish

May 21, 2016

 

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Close Quarters Speed Testing and Illumination
– Testing Methodology:  Adjustments, Reticle Size, Reticle Cant
– Testing Methodology:  Comparative Optical Evaluation
– Testing Methodology:  Close Quarters Speed

 

Background:

If any scope was a white whale to any writer, this is probably the scope and I am probably the disagreeable old salt. You see, the story of 1-8x scopes doesn’t start with the Leupold CQBSS, the first actually released, but rather with the S&B short dot 1-8x that probably never will be released, at least not in its original 1fp reticle / 2fp dot configuration.

I first heard of the S&B short dot 1-8x in 2010 when it debuted at SHOT Show. At that same show, Premier Reticles debuted a 1.1-8x. That second scope came about through a partnership between Premier Reticles and Optronika, a company formed when an engineering team left S&B to strike out on their own in 2008. At that time it looked to be a compelling race both in technical terms, as it was a race to make what I considered the first all-in-one daytime optic, and in human terms, as former teammates competing against each other always introduces another dynamic.

As often happens though, history would not become what people expected. S&B would kick around their design for years and eventually decide that the cost of production in the original first focal plane with projected red dot configuration was prohibitive. They would instead split the concept and produce two separate models, disappointing many folks who were desirous of a first focal plane scope with a projected dot. Premier would scrap the 1.1-8x design and debut a 1-8x, only to never produce either on a large scale (I believe a few 1.1-8x’s were sold to individuals who preordered and put money down). Premier Reticles would eventually be bought by an industry veteran, take up a more northern address, and become Tangent Theta. I would go on to review three 1-8x designs produced by other makers, but could not forget about those original two that seemed destined never to exist.

This summer, when watching Ilya’s SHOT 2015 notes, I learned that the Minox 1-8x which I had caught wind of was a product of the Optronika folks (now working with Minox) and actually existed. I would later find out that this Minox 1-8x was actually the refined descendant of the Premier 1-8x design that Optronika had retained the rights to. I was quite pleased that some product of this ill-fated contest had eventually borne fruit, and I obtained one for testing. Let us see if it has been worth the wait.

 

Unboxing and Physical Description:

Most of the scopes that I have reviewed in the past arrive on my doorstep new in wrapping. The exceptions to this have been three late gen prototypes, three designated test scopes, and the personal scope of one of Leupold’s marketing guys. Each exception to the typical shrink-wrapped shiny has tended to foster further exceptions. I have had prototypes without final lens coatings, with different than final markings, with perhaps more careful than normal assembly, and even custom shop produced scopes that should not have met the QC standards of the run of the mill. I have had designated test scopes that came with plastic insert lined rings to stay minty fresh and some that looked like they had already seen battle.

This particular exceptional scope came dressed only in unmarked Tenebraex caps in a brown shipping box with a separate package of Warne quick detach rings that I presume are a result of my request to provide rings with all test scopes as I otherwise frequently run short when several scopes are simultaneously on hand. I would later find, as is relatively common in testing, that this test scope was an early production model and a few changes have been made since then that the reader will find informative. As a result of customer feedback, maximum illumination brightness has also been increased. Additionally, the knurling on the turrets has been improved and their clicks have been made more tactile. Those buying the scope today will receive a product with these changes packaged with Tenebraex caps, a manual, spare battery, lens cloth, and hex wrench. I was able to download the manual online. It is a pleasingly technically detailed affair entirely devoid of marketing attempts to get me to buy the scope I have obviously already bought since I have the manual. The manual even goes so far as to chart the maximum possible parallax deviation for ranges from 0-1000 meters. I would characterize it as being in the European style, which is to say that it contains actual information where an American manual would have warnings and/or advertisements.

 

Minox ZP-8 1-8x with included Tenebraex flip caps

As for the optic itself, the ZP-8 1-8x24mm bears a great deal of resemblance to the stillborn Premier 1-8x design that is its forebear. The location of the saddle towards the front, the illumination turret design, and the angular eyepiece with locking diopter are all familiar. The greatest point of divergence from the old specification is the turret design. Both designs have 10 mil, .1 mil per click, single turn turrets that I have been informed are in response to a particular military requirement. The Premier version had the additional feature of a semi-toolless zero and zero stop reset, whereas the Minox features a locking function that holds the turret at zero while engaged. Ergonomically, I like the built-in low profile cattail, but think that the turrets and illumination control are a bit large for their feature content. The illumination control is the size of others I have encountered that also include a parallax adjustment, and the turrets are similar in size to turrets that include second rotations and pop-up indicators for those additional rotations. Also, though the low profile cattail is a nice grip, it is rather alone in this pursuit. Though the power ring is quite large, it is otherwise almost entirely smooth where I find myself wanting some grip. The positioning of the cattail is good, having about a 220 degree throw which starts exactly where the thumb of your right hand will be when you change the power from 8x to 1x with an overhand grip. However, the 220 or so degrees will probably be further than you typically rotate with one grip and, when you re-grasp, things will line up less advantageously, leaving you to wonder why, with all that potential space for it, there is no more grip. The appearance of this arrangement is nice, but ultimately I prefer the entire power ring to be grippy.
Reticle:

Designing the reticle for a 1-8x ffp scope is a significant challenge since at 8x that reticle will appear eight times the size that it appears at 1x and only 1/8th of it will be visible. It is also a challenge because with an 8x erector system between the reticle and the user, you can expect that the reticle would have to be etched very finely indeed to appear at all crisp to the user. Etching this fine is certainly possible, as illustrated by the Leupold CQBSS, though I have seen other 1-8x scopes that had reticles with a very thick “hairy caterpillar” appearance so I expect such etching is neither common nor inexpensive. The review ZP8 I have has the MR10 reticle and it does not have a hairy appearance, but is still rather thick looking at 8x and lacks much in the way of design. It is basically a mil hash with dot center crossed with a German #4. There are only two line widths:   the hash, dot, and top post are one line width and the other three posts are thicker. Divisions are in full mils with an extra wide line for the 5th. Though it is not a very creative design, in testing I found the MR10 to not interfere with the speed of the optic as the posts were thin enough at 1x not to be distracting and at that point you are pretty much just going on the projected red dot, which is good. At high power, the lines were a bit thick for the kind of precision I like to have, but from a practical standpoint were fine. I just really like the experience of shooting with fine lines even though the actual difference in my performance is small.

In addition to the MR10 reticle I tested, the ZP8 also comes in an A8-D reticle and an MR10+ reticle. The A8-D is basically a duplex design. The MR10+ is the MR10 I had with the addition of a limited 1mil grid below the primary aiming point for the same purpose as a Christmas tree and rapid ranging feature in the 9:00 quadrant. I like both of these concepts, especially as its 1mil grid is very sparse. The rapid ranging feature is, like most such features, dependent on the user’s understanding of the markings and what they stand for. The markings, in this case, are based on 1 meter vertical or .5 meter horizontal target dimensions. My biggest criticisms of this particular feature in this particular design are as follows. First, it is one of the more difficult to intuit arrangements I have seen and the user may forget how to use it between the time the manual is read or training administered and its actual use. Second, its arrangement does not allow the easy off-label calculations that it would if all markings were paired with their exact halves such as 300 and 600 meters or divided in half as is sometimes the case on other designs. Finally, in my past experience, 1 meter has not proven to be a large enough stadia to easily range, though with 8x max magnification, the Minox may somewhat mitigate this. What I liked most about this rapid ranging feature was that, in addition to existing (I like rapid ranging features), it was very cleverly and efficiently laid out in terms of real estate used and therefore obscured very little. In addition to the additional ranging and distance compensation features over the MR10, the MR10+ also has double the thickness outer posts on the reticle. I was sad to hear of this as thick posts in all forms have consistently proven to be a speed killer in my close quarters testing despite the common belief that they will increase speed by being more noticeable at low power. Close quarters speed seems to have more to do with seeing the target than the aim point. Those who have become proficient at skeet shooting or sporting clays probably expect this, though most will find it counterintuitive. This leaves me with mixed expectations for the MR10+. It is a significant step up from the MR10 in terms of its ranging and distance compensation features, but I expect it may be slower due to the thicker outer posts.

 

MR10 reticle in the Minox ZP-8 1-8x scope as viewed through the scope at 8x during mechanical testing

 

Comparative Optical Evaluation:

It is a shame that I did not still possess any of the 1-8x scopes from my review a few years ago at the time I tested this optic. That being said, I did still have quite a few optics to compare it to. At the time of testing, I had 1-6x designs from Optisan and GRSC / Norden Performance, a 1/4x Elcan Specter DR, and a few high powered designs in the Leupold MK 6 3-18x and Zeiss Conquest 4.5-14x. While in no cases was this an apples to apples comparison, it was quite sufficient to give me a sense of where the ZP8 fits in the grand scheme of things as relates to optical performance.

My initial impressions as I started testing the ZP8 were not good. The eyebox at maximum power is quite critical, even for a 1-8x. This makes testing more difficult and less comfortable. You will note that the through-the-lens photo in the reticle section appears just a little off axis. It took a great deal of effort to get a picture that good and it is still well below my average quality for such photos. Adding to the difficulty of good eye position was an issue with stray light. Usually, the blacker the blacks and deeper the greens, the better a scope will prove to be in most optical respects. Scopes that have a bleached out look suffer from stray light caused by internal reflections. Most scopes that are high on stray light issues suffer from just about every other issue as well. Since these are amongst the first things I notice in an optic, it was not looking good for the ZP8.

With this sense of foreboding, I trudged on testing each of the many optical issues that I usually look for. I was surprised when the Minox showed no noticeable chromatic aberration and further surprised when the resolution proved to be good, better even than the Elcan. The field of view also seemed sufficient. It was not even the smallest in my group despite the much more difficult 1-8x design. The color rendition appeared to be fairly even and it rendered blue as blue instead of the common rendition as black. The barrel and pincushion distortions were also not of large magnitude as is common with many aggressive optical designs.

So, in the end, the ZP8 did not bomb out on the optical performance, though it did have a rocky start. With the stray light issue, I would certainly recommend a sunshade or ARD and, because of the small eyebox, it is not very comfortable to use at max power, but it does resolve very well and does not have any other major optical issues. Without running the risk of comparing it directly to any of the 1-8x designs I have reviewed in the past, I would venture to say its overall performance is on the lower side of the middle of 1-8x designs. My initial impression was misleading regarding optical performance, just as it would later be misleading regarding close quarters performance.

 

The comparison lineup from left to right- Leupold MK6 3-18x44, Zeiss Conquest 4.5-14x44, Optisan CX6 1-6x, Norden Performance GRSC CRS 1-6x, and Minox ZP8 1-8x, *Elcan Specter DR 1/4x not pictured*

Mechanical Testing and Turret Discussion:

The practical distinction of most 1-8x designs from 1-6x designs is usually that the former is equipped for use at ranges long enough to require significant drop and drift compensation. A 1-6x design is typically just a 1-4x close quarters scope with a little more magnification, perhaps for observation or target identification. Most 1-8x designs are actually precision rifle scopes with large turrets, zero stops, and precise 10 mil per turn click adjustments. The ZP8 does feature fairly large turrets with .1 mil clicks, a zero stop, and a push-down, pull-up lock at zero only. The only surprising feature of the elevation knob is the single turn limitation. I found this strange as the internal travel is far more than 10 mils, and was later informed that the limitation is the result of a particular set of military requirements. The windage is also limited to half a turn, 5 mils each way, though this is common among precision scopes. In the case of both adjustment knobs, zero is reset using three set screws, a common (though inelegant) solution. The feel of the adjustments is acceptable. They move with about average force, with clicks that are lightly tactile though barely audible. I am not a real stickler on adjustment feel so I will call them fine. I have been told that the turret feel has been significantly improved since the production of the test scope I used, so I do not anticipate that a prospective buyer would be displeased even if a bit more picky about turret feel than I am.

In testing, I found the adjustment magnitude small by .2 mils at 10 mils, measuring 9.8 mils traveled at the 10 mil mark on the scale. Windage out to 4 mils appeared fine left and .1 mil small right. I expect that the actual windage deviation is of the same magnitude as the elevation; however, windage deviation is much more difficult to test in practice, as the target and windage knob have smaller travel ranges and, more importantly, it is difficult to square the target precisely to the shooter. Both adjustments returned to zero properly. My testing further found accurate reticle dimensions, no zero shift with power change, and no measurable reticle cant. In order to further test the elevation travel, I reset the zero stop at 10 mils to measure the travel from 10-20 mils from optical center. The same .2 mil at 10 mil deviation was noted. Also noted was a significant diminution of optical clarity becoming very noticeable around 15 mils from optical center. All scopes deteriorate optically as they diverge from optical center, so this was not abnormal or alarming in any way.

Overall, the magnitude of adjustment deviation in the ZP8 is greater than the average I have found in testing high powered precision scopes, though it is by no means the greatest. It would be very interesting to have tested all the other 1-8x scopes in this manner when I reviewed them a few years back, but I did not yet have the capability at that time. For those interested, this 2% deviation comes out to a little less than 20cm at 800 meters for a .308 cartridge though, since the deviation is linear, it could easily be eliminated in practice by adjusting for it during the process of creating the ballistic table. In such a scenario, no ill effects result from the deviation in adjustment magnitude.
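
As a sketch of how that ballistic table correction might look, the following assumes the deviation really is linear, as measured, and uses a hypothetical 800 meter drop value purely for illustration.

SCALE = 10 / 9.8   # this scope moves 9.8 mils when 10.0 is dialed, so dial values must be scaled up

def dial_value(true_drop_mils):
    # Convert a true drop from the ballistic solver into the value to dial on this particular scope
    return true_drop_mils * SCALE

# Hypothetical example: a solution calling for 8.6 mils of true drop at 800 meters
print(round(dial_value(8.6), 2))   # dial ~8.78 mils and the reticle actually moves 8.6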

Minox ZP8 1-8x adjustments

 

Close Quarters Speed Testing and Illumination

My initial expectation for the Minox in close quarter testing was that it would be rather slow. This expectation was based on the conventional crosshair type reticle design, and illumination that initially did not seem bright enough to me.

Though the illumination of the Minox is based on the new diffraction grating technology you may have heard me talk about in my SHOT 2014 blog, a technology capable of being obviously daytime bright, Minox initially did not provide a setting which had this obvious daytime pop. I thought it would not be enough and, for the most illumination sensitive of our testers, it wasn’t. However, for all other testers the scope functioned as daytime bright even when testing was in the direction of the sun. My conclusion was that the illumination was bright enough to get most users’ attention in daytime use and my initial impression was therefore incorrect, though common. In the end, it was a common enough impression that Minox has since decided to include brighter illumination settings anyway, rendering arguments on the point moot. I should mention that this daytime bright dot is in addition to a non-daytime bright illumination scheme for the whole reticle, and that the active illumination automatically switches with the power ring at about 2x. This is a very seamless and intuitive way to accomplish the switch.

Similar to my mistaken expectations regarding the illumination sufficiency, I proved to be too hasty on the reticle judgment as, at 1x, it was so fine as to not be a distraction to close quarters use. Furthermore, at low power the scope did not have any of the issues with stray light it had exhibited at high power, so do not expect any whiteout lens flare from difficult sun angles. Only minimal image distortion was present (there is always some distortion in any scope), allowing for good left/right eye merging in both eyes open use. Field of view was middle of the road, which is pretty good when you’re the only 1-8x scope in a lineup that includes the massive FOV of the unconventional Elcan Specter DR design. Eyebox at 1x was in line with what I am used to from 1-4x and 1-6x designs and therefore better than 1-8x designs I have tested in the past.

On balance, the Minox averaged on the faster side of a test group which was notably skewed toward scopes that have, in the past, been on the better side of previous close quarters testing. I believe all the scopes used as references against the Minox in this close quarters testing are actually in the top 25% of all scopes I have tested in that regard. Given that most 1-8x scopes I have tested in the past have not been in the top 50% of scopes tested, the Minox, being better than average in this difficult lineup, did very well overall indeed, and I expect it will do even better with the brighter illumination setting.

 

The Minox ZP8 1-8x during close quarters testing

 

Summary and Conclusion:

It is difficult to write this conclusion as it is always hard to be objective when you are emotionally involved. Having seen prototypes for years along the way and following the twisting and turning of who works for whom and is producing what, I really wanted something that was probably entirely unrealistic. I wanted a 1-8x that was the size and weight of a 1-6x and had the mechanical performance of a sniper scope, all with uncompromising optical performance. This is wholly unrealistic, and none of the 1-8x designs that I have tested in the past have come close to achieving it. In practice, the Minox represents a 1-8x of similar size and weight to most other scopes in this category. I believe it is faster in close quarters testing than all of those, and I expect that if it were stacked up to them optically, it would be average. In short, I do not feel that any of the 1-8x designs on the market has yet achieved the goals of the one rifle, one optic dream. However, this is one of the top 1-8x scopes, performing particularly well in close quarters testing and also having a good feature set for use at range.

 

Here is Your Pro and Con Breakdown:

Pros:
Excellent close quarters performance, probably the best 1-8x in this respect
High resolution, low distortion, and low chromatic aberration
Diffraction grating based illumination system that will be daytime bright on new scopes
Illumination system automatically switches from dot to whole reticle with the power ring
.1mil per click, 10 mil total locking click adjustments
Reticle was spot on accurate and had no measurable cant
Included Tenebraex caps and a good warranty make for nice extras

Cons:
Weight, size, and cost are in line with other 1-8x designs – high
Small eyebox at high power makes it uncomfortable to be behind
I can’t find myself loving any of the reticle designs
Illumination on the first production run may not be seen as daytime bright by some users
Haze from stray light makes the image less dynamic at high powers > about 6x
Adjustment accuracy deviation of 2% was high for precision rifle scopes

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 or so mil of deviation is allowed from center in the erector, as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27x56

 

The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.
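
To give a sense of scale for what the rig resolves, here is a minimal sketch of what a single .1 mil click subtends on the target at 100 yards; the conversion factor is the standard 3.6 inches per mil at 100 yards, nothing specific to my setup.

RANGE_YDS = 100
INCHES_PER_MIL_AT_100YDS = 3.6   # 1 mil subtends 3.6 inches at 100 yards

click_mils = 0.1
click_inches = click_mils * INCHES_PER_MIL_AT_100YDS * (RANGE_YDS / 100)
print(round(click_inches, 2))    # 0.36 inches of reticle movement on the target per click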

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18x44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose, as the target was designed to be fired at rather than used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. Each click of movement in the scope adjustments moves the reticle on the target, and this can be observed by the tester as it actually happens during the test. It’s quite a lot of fun if you are a bit of a nerd like I am. After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of adjustment dialed of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during that test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.

Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them, and I very much doubt they would be willing to tell you what this number is or, better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer the average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear. This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test purchased scopes that have these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with the adjustments, so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mils at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.

 

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the user’s experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for that.

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44

 

Testing Methodology:  Close Quarters Speed

As with the assessment of optical performance, the assessment of close quarters speed is a subjective process similarly involving a lineup of comparison scopes. Each scope is affixed in turn to an airsoft AR15 and used to engage an array of targets arranged in a 180 degree field of fire from the shooter. The use of a quality airsoft rifle as the test vehicle both reduces cost and virtually eliminates safety concerns relating to such a wide field of fire. Central to this testing is the use of multiple shooters and multiple firing scenarios. These firing scenarios involve movement on the part of the shooter, mounting and unmounting of the rifle, target to target transitions, and firing from the non-dominant side. Scopes are assessed with and without their illumination active. The panel of shooters has proven invaluable in this testing as different shooters have proven to have different tolerances for different aspects of optical compromise. For instance, I am relatively insensitive to poor or absent illumination but very sensitive to barrel and pincushion distortion of the image. One of my other testers is relatively insensitive to this distortion but very sensitive to bright illumination. I also have a tester who seems very sensitive to reticle design and another for whom eyebox seems quite crucial. Gathering together the rankings and comments of all these shooters on a test lineup of scopes after a few hours of swapping them on and off the test rifle provides enough data to write something intelligent on the general performance of a particular optic relative to its peers as well as some specific information about which aspects of design that scope excelled in or was deficient in.

 

An over-the-shoulder view of close quarters speed testing in progress

 

Sep 10 2015
 
BigJimFish logo

Review of the Leupold Mark 6 3-18x44mm Illuminated Optic

Les (Jim) Fischer
BigJimFish
July 10, 2015

 

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Testing Methodology:  Adjustments, Reticle Size, Reticle Cant
– Testing Methodology:  Comparative Optical Evaluation

 

Background:

The tactical community, like any other, has trends, in-crowds, and must-have status symbols. This was a bit surprising to me at first:  everybody walking around tradeshows with the same pants (5.11) and backpacks (Eberlestock). But, people are people, and in many ways those who inhabit one industry are no different than those in another. This trendiness does not just apply to the styles worn. Rifle makers, accessory manufacturers, stock makers, and many others do not display their wares alone but rather fully decked out to make them look cooler and more ready to rock and roll. This entails choices as to which other products to have in your display. These choices can reflect well or poorly on the brand in question based on their quality, suitability, and, most importantly, whether or not they represent an up-to-date knowledge of the products in favor with others in the industry:  You don’t want to be caught wearing last year’s optics do you?

 

I mention all of this not just as social commentary or amateur psychology, though I obviously find it amusing and informative as such, but more importantly because it applies to this product specifically. Judging by the choices of makers in their displays as well as articles in gun rags, this Leupold Mk 6 3-18x is the must-have optic of the industry. I am not surprised. The Leupold name is such that even prior to the launching of the tactical division, when its tactical products were clearly out of date, Leupolds could still be found in many displays. Now this division is producing products that are not only up-to-date in terms of features, but also are quite aggressively designed. At less than 1 ft in length and 23.6 oz in weight, the Mk 6 3-18x is smaller and lighter than virtually any competing product. The benefits of this are obvious to anyone carrying it and, as we will discuss later, it is not easy to design, and even more difficult to manufacture, a scope with these sorts of dimensions. It is therefore not surprising to me that this optic has become the darling of the industry. I for one was immediately taken with it and I have looked forward to few reviews as much as this one.

 

Unboxing and Physical Description:

Unboxing a brand new Leupold can be one of the great joys of reviewing rifle scopes. The Mk 8 1-8x in particular was a gem of packaging perfection, coming, as it did, with perfectly cut foam displaying the product and each of its many extras. Despite being near the same price ($3.2k for the illuminated M5B2 knobs and TMR reticle version I tested) the Mk 6 3-18x was very differently packaged. Its packaging is quite basic. It came in a box with padded foam end caps, some manuals, battery, hex wrench, plastic Butler Creek caps, and a bumper sticker – like just about every scope I have reviewed except that the box was actually a little on the small side and the elevation knob impinged on the top. I found the experience a bit odd given the elaborate packaging in the 1-8x and even the 1-6x scopes of theirs that I have previously reviewed:  those scopes pretty much came in a display case while this had a box that was too small. I wonder how that happened?

 

Leupold Mark 6 3-18x with box and accessories (Bobro Dual Lever 34mm mount pictured is not included)

 

Anyhow, the optic will be the only thing that matters in the end. As mentioned previously, this one is exceptionally small and light. This is certainly the first thing that will strike the user and the impression will be a dramatic one. It really is so much smaller and lighter than what you are used to that it will come as something of a shock. Other features that the user may find somewhat unusual are the elevation knob and the intricately hinged battery door. This optic comes in two elevation configurations. The original elevation configuration, which I have, is called the M5B2 knob. This knob locks at any position and must be squeezed while rotated. It features a zero stop that is adjusted using one large hex set screw instead of the three tiny ones that you have probably seen on many other optics. I consider this an advantage in both ease of use and probably durability as well. The M5B2 knob also has a tactile revolution indicator that indicates to the user what revolution the knob is on. It is a two revolution, 10 mils per turn knob. The last, and probably most interesting, feature of this elevation knob is the external, tool-lessly repositionable or changeable scale. This ring is held on by two spring-loaded pins which, when depressed, allow the zero indicator to be moved or the whole ring to be replaced with another. This allows the user to set whatever mils below zero stop are desired or to replace the generic indicator ring with any of a number of existing or custom BDC rings. It’s a pretty nifty and feature-laden elevation knob. The other elevation knob available on this optic is called the M5C2 knob and is a low profile option. Like the M5B2, it is a zero stop, two revolution, 10 mils per turn unit with a turn indicator, but it is much smaller, only locks at zero, and features no movable or removable target scale. In either case, the scope features a capped, low profile, 5 mils each way windage knob. The Mark 6 diopter is a very nice locking euro style unit, and the power change ring’s grip is nice, large, and easy to grasp. Overall, the features on this scope are quite up-to-date with current market preferences, right down to the 34mm tube size.

 

Reticle:

I would love to say that a wide variety of very interesting reticle options exist for this unique optic, but that is not the case. There are only two reticles for the illuminated version of this optic which I will discuss. Depending on which elevation knob you choose, there are 6 options for the un-illuminated Mark 6. The reticles are:  the TMR (comes illuminated), CMR-W 7.62, CMR-W GRID, H-58, H59, and TREMOR 2 (comes illuminated). The TMR, by far the most common choice, is a simple mil hash reticle with divisions every .5 mils, a few sections with divisions at .2 mils, and no graduation labels. It is not a very artful affair, with little variation in line widths, making it rather thick in the center at max power, and has no Christmas tree or rapid ranging features. It plainly isn’t what anybody really wants, but it will work for most everybody and can be had in every possible configuration. The TREMOR 2 is Horus’s latest grid type reticle. If you are not familiar with the Horus concept of drop and drift compensation, you should check it out as it is an interesting alternative to the far more common dialing of drop and holding of drift that most shooters do. In the case of this optic, it is also a $1,250 up-ding in price.

 

Comparative Optical Evaluation:

Going into this evaluation I had absolutely no idea what to expect. From my previous work with the Mark 8 and Mark 6 lines, I knew Leupold put top flight glass into these products, but the extreme size and weight difference of the 3-18x when compared to all of the scopes with which it competes was an argument in the other direction as regards optical performance. You see, making a scope short, in particular, is difficult as it requires the light to be bent at more dramatic angles upon entering the optic. This can be mitigated somewhat by the addition of entire lens groups in place of single lenses, allowing each to do only a little bending. This is why many short optics are actually exceptionally heavy. This Leupold is both short and light; a very difficult thing to do from an optics design standpoint. Every aspect of design is complicated by this, as distortions originating in wavelength differences between colors, spherical vs. parabolic lenses, and imprecise positioning of the lenses are all affected. Furthermore, assembly is at least as dramatically affected because tolerances dwindle to almost nothing. I have viewed some very pricey short scopes that suffered the ravages of compact stature and were borderline unusable, so I was apprehensive when approaching this section of the test. Would the little Mark 6 perform or was it more of a novelty than a contender?

 

At the time I tested the optic, I had quite a variety of optics on hand to compare side by side with it:  the Vortex Razor HDII 5-25×56, USO LR-17 3.2-17×44, Nightforce SHV, Burris XTR II 4-20×50, and an older Zeiss Conquest 4.5-14×44. This suite of test optics varied widely in price and included both scopes aimed at the tactical market and those designed to appeal to hunters. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.

 

The comparison lineup from left to right- Vortex Razor HDII 5-25x56, Nightforce SHV 4-14x56, Burris XTR II 4-20x50mm, USO LR-17 3.2-17x44, and Leupold MK6 3-18x44, *Zeiss Conquest 4.5-14x44 not pictured*

 

Pretty early on in the optical evaluation, it became apparent that the scopes were sorting themselves into three groups. The USO and Vortex were clearly optically superior to the others. They had bigger fields of view, higher resolution, better contrast, and lower chromatic aberration. They were also very close to each other in performance. After a bit of a gap in performance, the next group was also very close to each other and included the Leupold, SHV, and Zeiss. The Burris brought up the rear, not really comparing closely with anything else in the analysis despite its price being very close to that of the SHV and almost double that of the Zeiss. Because of these clear tiers, I spent most of my time comparing the Leupold to the Nightforce and the Zeiss. Interestingly, these scopes, which were closest in terms of performance, also had the greatest disparity in price and features.

 

Optically, the Leupold performed in the middle of the pack in a lot of ways, which was surprising given the extreme nature of this scope in terms of cost, mass, and stature. The eyebox, which determines so much of the user experience, was in the middle of the scopes tested regarding comfort. It was not as roomy, neutral, and comfortable as the Nightforce, but was much better than that of the Burris and somewhat similar to the USO in feel. Quite usable and comfortable overall, and much better than what I have encountered in many other short scopes, most of which I have found borderline unusable. This was a great relief to me as there is really no merit attractive enough to make me want to use a scope that is uncomfortable to look through. Similarly, the Leupold was somewhere on the better side of the middle of the pack regarding chromatic aberration, field of view, and resolution, with color rendition being better still. In general I found it to be the best of the three scopes in its group, though significantly behind the USO and Vortex. The only place where its short stature unmistakably hampered its performance was depth of field. It had by far the shallowest depth of field of any scope tested, with objects in front of and behind the focus point being clearly out of focus. Interestingly, the throw on the parallax knob of this scope is only 45 degrees, further underlining the touchy tolerances inherent in the design.

 

I suppose the Mark 6 3-18x’s optical performance could be viewed as a smashing success or ignominious failure with reasonable and compelling arguments on both sides. At $3,250 as tested, it is substantially the most expensive scope tested yet did not perform even within the same bracket as the other costly scopes. Perhaps it is therefore a failure. Yet, at less than 1 foot in length and 23.6 oz in weight, no other scope even approaches its diminutive size and only the dramatically less feature rich Zeiss is competitive in weight. So, we could just as convincingly argue that the Mark 6 must be the most impressive by far as it performs on the better side of the middle of the pack optically yet saves such immense size and weight vs. all similarly featured contenders. I wonder how each of these arguments strikes you?

 

Mechanical Testing and Turret Discussion:

The M5B2 knobs on the version of the Mark 6 3-18x I tested have proven to be somewhat controversial. Though the M5B2 adjustments are in the middle of the pack size-wise for tactical knobs, and substantially lighter because Leupold uses some exotic aluminum alloys in place of the brass more common in the industry for many adjustment parts, many people were interested in slimmer adjustments. This was presumably to complement the overall diminutive stature of the optic, and the M5C2 knobs, which are a variation on the MBC1 knobs available on the Mark 6 1-6x, are now available. In addition to the size of the M5B2 knobs, the pinch and turn locking mechanism and removable scale have also divided users. For my part, I rather like these features. The size seems about right for a tactical knob that will get a good deal of use, the pinch and turn system strikes me as middle of the road in the balance between convenience and avoidance of accidental adjustment, and I like the removable scale for the ease with which I can choose exactly how far below zero I want to set my elevation. I also like the turn indicator system and the clarity and size of the scale markings which, being on the removable collar, also make the addition of a custom scale easy. I might actually get used to that since it contains many of the features of a dope card. The feel of the turrets, though generally a subjective aspect of design, is probably universally found to be distasteful on the Mark 6:  they are really quite mushy. The clicks are very definitely audible but more on the fence about being tactile. The locking nature is also less positive than most. While locked, the turrets can be manipulated within the full .1 mil of the click they are on, and the reticle does move with this play. I do not feel this is a functional issue, but it’s not the feel you are going for. So, the form, fit, function, and feel of the M5B2 turrets are controversial. Personally, I like the form and function, but find the fit and feel to be lacking.

 

My initial mechanical testing with the Mark 6 3-18x did not go well. By 5 mils of travel, the scope had gained .2 mils and read 5.2 on the target. Reading 10 on the knob, it was at 10.6 on the target, and at 14.2 it was at 15. The windage adjustment was similarly off. This magnitude of deviation in travel was much larger than the deviation in reticle size of about 1% to the large side. Other aspects tested fared better, with cant insignificant at about .4% and total travel of about 15.4 mils with another 1.1 mil in adjustment movement past where reticle movement ceased. This travel was significantly better than the spec I have found for the optic.
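To put numbers on that drift, here is a minimal sketch of the arithmetic, using the dialed vs. observed pairs from my test above; nothing in it is specific to the Mark 6 beyond those three data points.

```python
# Percent tracking error at each checkpoint: (observed - dialed) / dialed.
pairs = [(5.0, 5.2), (10.0, 10.6), (14.2, 15.0)]  # dialed mils, observed mils

for dialed, observed in pairs:
    error_pct = (observed - dialed) / dialed * 100
    print(f"dialed {dialed:4.1f} mil -> observed {observed:4.1f} mil ({error_pct:+.1f}%)")
```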

 

Obviously, I found this degree of deviation (2-5+% depending on distance from optical center) to be well beyond acceptable in a precision rifle scope. At 800m with a .308, you would be somewhere around a half meter off. That is enough to miss just about any target. I forwarded my findings and methodology to the representative at Leupold and he requested I send the optic back so they could have a look at it, for which he provided a label. I sent the scope in right away and received it back in a little less than a month, shipping time included. I checked the serial number and the optic received back was the same one sent in. The explanation for the deviation I received from Leupold was that the wrong adjustment screws were installed on a few early examples and mine had been one of those. I do not feel great about this. Assembly errors and sloppiness will occur at some rate regardless of the quality of a shop. That is the purpose of QC checking using collimators or less sophisticated testing mechanisms such as the one I use in this review. The fact that not only were the wrong adjustment screws used, but also that the resulting significant deviations were not discovered, is worrisome.
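For readers who want to translate an angular error into a miss distance, the conversion is simple:  1 mil subtends 1 m at 1000 m. The sketch below takes the roughly 0.6 mil of over-travel I saw mid-range as representative; how much error you actually accumulate depends on how much elevation your shot requires, so treat the numbers as illustrative.

```python
# Linear miss produced by a given angular error.
# 1 milliradian subtends 1 m at 1000 m, so miss (m) = error_mils * range_m / 1000.
def miss_meters(error_mils: float, range_m: float) -> float:
    return error_mils * range_m / 1000

# ~0.6 mil of unwanted elevation at 800 m:
print(f"{miss_meters(0.6, 800):.2f} m")   # ~0.48 m, i.e. roughly half a meter
```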

 

Upon return from Leupold, no such adjustment deviation was present. Actually, not only was no deviation detectable in the adjustment magnitude, but also no deviation was detectable in reticle size or reticle cant. Apparently, while switching out the adjustment screws they made a few other improvements as well. Certainly Leupold’s service left nothing to be desired.

 

Leupold Mark 6 3-18x showing M5B2 elevation knob with scale removed and hinged illumination battery door open


 

Summary and Conclusion:

At the outset of this review the question hanging in the air was whether the Mark 6 3-18x would be the end-all, be-all mid-powered tactical scope, superior to all others because it had all the features, all the performance, and none of the weight, or whether it would just be another overly ambitious, ultimately unusable design. The truth turns out to be more complicated. The Mark 6 3-18x is optically good, but more comparable with scopes in the $1.5k price range than those at $3k. In use, it is comfortable for the eye but a bit touchy on the parallax. The all-important elevation knob is feature rich but also poor in feel. Notably, the cost for the illuminated version as tested ($3.2k) is a great deal more than an otherwise identical model without lighting ($2.2k). Initially, I thought this might just be business majors doing what they do, but I have reconsidered and expect that the aggressive short design probably makes the installation of the lighting complicated:  increasing assembly time and therefore cost.

 

Regarding the mechanical performance, since you are reading this and therefore probably read a great deal of long range shooting material, you have probably been admonished many times about testing the tracking of your optic. I hope my experiences when testing a brand new Mark 6 straight out of the box have been illustrative. You really can't take the performance of your equipment for granted. It must be tested. Leupold certainly made things right when presented with my tests, but had I not been so thorough and instead set my 100 yard zero, grabbed a calculator-generated dope card, and just marched off to vaporize some gophers or compete, all I would have succeeded in doing is kicking up little dust clouds well behind my targets.

 

Personally, at the conclusion of the review, I am at least as enamored with this optic as I was at the start. To my thinking, many scopes from many countries and brands have become quite optically excellent at the cost of tremendous weight. The Mark 6 3-18x is unique in delivering good optics as well as all the necessary tactical features in a small, light package. It is an absolute no-brainer for anybody using an accurized AR for longer range varmint duty or for a long range hunter who anticipates having to carry his rifle significant distances. I suspect there are also quite a number of servicemen who, being loaded with every contraption ever devised by men who never had to carry them, just do not want one more damned heavy thing to lug around. There will probably never be an optic that is the end-all, be-all, as it has always been said in optics that you get what you pay for. The Mark 6 3-18x is no exception to this rule, though it is unique because, while every other contender has tried to optimize optical performance with no limit to size and weight, this design instead optimizes size and weight at some cost to optical performance. That is a price I am willing to pay.

 

Here is Your Pro and Con Breakdown:
Pros:
-Lighter and smaller than most anything it competes with
-Comfortable to use
-Good optical performance
-Full set of tactical features desired by long range practical shooter
-Solid brand with excellent warranty
-Some unusual and innovative elevation knob features
-Did I mention it’s really small and light?

Cons:
-Illumination costs $1k extra
-Elevation knob feel is poor
-Optical performance is lower than other scopes at its price point
-Touchy parallax knob
-Reticles are less than compelling
-May be some issues with QC

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant:

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 or so mil of deviation is allowed from center in the erector as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27x56

 

The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18x44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise, test platform. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test. It's quite a lot of fun if you are a bit of a nerd like I am. After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note, every 5 mils of adjustment dialed, of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during this test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.
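The "little math" at the end is just an arctangent of the sideways drift over the vertical travel. A minimal sketch, with purely illustrative numbers rather than figures from any particular scope:

```python
import math

# Reticle cant angle from the drift observed during an elevation run.
def cant_degrees(horizontal_drift_mils: float, vertical_travel_mils: float) -> float:
    return math.degrees(math.atan2(horizontal_drift_mils, vertical_travel_mils))

# e.g. 0.1 mil of sideways drift over 15 mils of elevation travel
print(f"{cant_degrees(0.1, 15.0):.2f} degrees of cant")   # ~0.38 degrees
```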

 

Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don't even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear. This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test scopes purchased that have these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

 

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope's elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow-up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mil at that distance. That is a lot of drop and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.
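If you want to run that cant example yourself, here is a minimal sketch of the calculation using the same numbers as the text (1 degree of cant, 12.1 mil of drop, 1000 m):

```python
import math

# Lateral error produced by reticle cant: the drop you dial or hold swings
# sideways by sin(cant); converting mils to cm at the target is then trivial.
def cant_error_cm(cant_deg: float, drop_mils: float, range_m: float) -> float:
    lateral_mils = drop_mils * math.sin(math.radians(cant_deg))
    return lateral_mils * range_m / 1000 * 100   # mils -> meters -> cm

print(f"{cant_error_cm(1.0, 12.1, 1000):.0f} cm")   # ~21 cm
```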

 

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.

 

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can't seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the user's experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for that.

 

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks, of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic's performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44

 

Sep 10, 2015
 
BigJimFish logo

Review of the Nightforce SHV 4-14x56mm Illuminated Optic

Les (Jim) Fischer
BigJimFish
July 3, 2015

 

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Testing methodology:  Adjustments, reticle size, reticle cant
– Testing methodology:  Comparative optical evaluation

 

Background:

In the past few years, Nightforce, like many other scope makers, has dramatically increased the variety of its offerings, from two main scope lines to five with a few extras thrown in. Most of these new lines, such as the ATACR and Competition lines, are more or less higher end updates to existing lines. The SHV line, instead of adding features, subtracts them:  with some of the cost being one of the things subtracted.

 

Basically, the SHV line is a less feature rich version of the NXS line. It has simplified adjustments, less magnification range, and is a little less over-built (apparently not by much, as the weights of similar models are almost the same across the two product lines). The one I will be looking at is a 4-14x that has an erector ratio of 3.5x, whereas the closest NXS, the 3.5-15x, has an erector ratio of 4.3x. These two scopes are quite similar in appearance, being exactly the same length, having the same size objective, and differing by only 2.5 oz in weight. The lineage is obvious. The difference is that the NXS has more options, zero stop tactical knobs with more clicks per revolution, the greater power range, and costs about $700 more.

 

So the concept is simple. You have a market of mostly hunters who want the Nightforce name and quality but aren't so keen on the high price. They really only use their adjustments for zeroing the scope, so why not make a model with simple adjustments that is more affordable? In a nutshell – that's the SHV line.

 

Unboxing and Physical Description:

A few years back Nightforce abandoned the strange triangular box that used to distinguish their product and drive anyone trying to stock it to madness, so at this point there is not much to say about the box. Inside, the scope comes with rather sparse manuals, generous amounts of bumper stickers, and the rubber bikini covers that most NF scopes seem to come with. The gem of the extras may be the little baggie with self-contained lens cleaning cloth (though I am always afraid to use this as intended since I get worried it will pick up abrasive dust hanging on my rig or in my pocket, so I usually put it in a plastic baggie, which kind of defeats the purpose). Anyhow, notable in the fairly sparse documentation is a dimensioned reticle diagram, which is a whole lot more useful than 30 pages of warnings telling you in tortured redundancy not to shoot yourself or others and also not to use the scope to stare at the sun.

 

Nightforce SHV Unboxing


 

The optic itself looks very much like the familiar NXS, which is not surprising since it is virtually identical in size and shape. The obvious difference is the small capped adjustments on the SHV. I probably shouldn't say small as, for capped adjustments, they are fairly large. They are, with cap on, almost as large as the NXS exposed adjustments. Removing the cap reveals that they can be adjusted without tools and have a resettable zero, though it requires a tool to do the zero reset. This tool-requiring system seems unnecessary to me as they are capped anyway and a pull up, push down system would have been easy to implement and is present on many competing designs. Perhaps that would have been a substantial bump in price, price being the great advantage of the SHV line:  it really is a lot cheaper than an NXS. The scope has a 30mm tube, is 14.8″ long, has a 56mm objective, and weighs in at 28.5oz. The sum of this is that it would be on the lighter side for a tactical optic but is both heavy and large for the hunting class. Two features from the tactical lineage remain in the option for illumination and the matched reticle and adjustment units. These are true MOA in the case of my test unit. The scope focuses down to 25M, which I always like to see, and has the euro-style fast focus diopter that I prefer and which has become almost ubiquitous.

 

Reticle:

One of the differences between the SHV and NXS lines is the more limited options in the reticle department. They generally come in only the MOAR and IHR reticles. The IHR is basically a descendant of the German #1, or 3-post, reticle. As such, it does not offer range finding or drop compensation capabilities. The MOAR is a ladder type reticle with 1 MOA graduations. This reticle has proven to be Nightforce's most popular choice on most of its models. Unlike most reticles in the market, the MOAR strikes me as very carefully designed with respect to line widths, graduation size, clutter minimization, and general appearance, though I am not a fan of true MOA as a graduation dimension:  the math to range find with it is very cumbersome relative to mils or IPHY (see the sketch below). I will admit that when I was designing my reticle, the MOAR was one of a set of reticles I referenced when deciding the angular subtension (thickness as it appears to the user) of some of the lines. The illumination on the model with that feature lights only the central crossing portion of the reticle, as is probably most common in hunting optics. This illumination is of the reflected technology used in virtually all high powered optics. The reticle had no measurable cant relative to the adjustments in testing.
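To illustrate the ranging-math point, here is a minimal sketch of the standard reticle-ranging formulas. The constants are the usual geometric ones (1 mil subtends 3.6 in at 100 yd; 1 true MOA subtends about 1.047 in), not anything out of Nightforce documentation, and the target size is purely an example.

```python
# Standard reticle-ranging formulas.
def range_yd_from_mils(target_in: float, mils_read: float) -> float:
    return target_in * 27.78 / mils_read        # 27.78 = 1000 / 36

def range_yd_from_moa(target_in: float, moa_read: float) -> float:
    return target_in * 95.5 / moa_read          # 95.5 ~= 100 / 1.047

# An 18 in target reading 1.0 mil (about 3.44 MOA) works out to ~500 yd either way,
# but the mil version also collapses nicely for metric users: range_m = target_cm * 10 / mils.
print(round(range_yd_from_mils(18, 1.0)), round(range_yd_from_moa(18, 3.44)))
```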

 

Comparative Optical Evaluation:

At the time I tested this optic, the optics that I had on hand, and therefore was able to compare it to were:  the Vortex Razor HDII 5-25×56, USO LR-17 3.2-17×44, Leupold MK6 3-18×44, Burris XTR II 4-20×50, and an older Zeiss Conquest 4.5-14×44. This suite of test optics varied widely in price and included both scopes aimed at the tactical market and those designed to appeal to hunters. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.

 

The comparison lineup from left to right- Vortex Razor HDII 5-25x56, Nightforce SHV 4-14x56, Burris XTR II 4-20x50mm, USO LR-17 3.2-17x44, Leupold MK6 3-18x44 not pictured* Zeiss Conquest 4.5-14x44.


 

Pretty early on in the optical evaluation it became apparent that the scopes were sorting themselves into three groups. The USO and Vortex were clearly optically superior to the others. They had bigger fields of view, higher resolution, better contrast, and lower chromatic aberration. They were also very close to each other in performance. After a bit of a gap in performance, the next group was also very close to each other and included the Leupold, SHV, and Zeiss. The Burris brought up the rear, not really comparing closely with anything else in the analysis despite its price being very close to that of the SHV and almost double that of the Zeiss. Because of these clear tiers, I spent most of my time comparing the Nightforce to the Leupold and the Zeiss and contemplating the implications of this, since the Leupold costs nearly 2x as much as the SHV without illumination and 3x as much with. The Zeiss, when it can still be found, costs a bit over half as much as the SHV. This is quite a price disparity for optics that are very similar in optical performance, and it is notable that, though the most similar to each other optically, these three were also the least similar in features.

 

The best thing the SHV had going for it in the testing is that it is very comfortable and easy to get behind. The eyebox is not critical at all and instead gives the user a good bit of movement latitude without much distortion. Similarly, the image through the SHV is also very flat and distortion free. There is really no noticeable curvature of field in the SHV, so the whole field of view appears in focus at the same time and same head position. Adding to this appearance, this scope has great depth of field, so objects that are at substantially different distances from the user often appear simultaneously in focus. The user should be mindful of this forgiveness when using the optic, as parallax error is easy to introduce when you have so much latitude regarding head position and depth of field without making any adjustment:  the parallax is still there even if things all appear in focus.

 

Flatness, I would say, was a recurring term for this optic. While this was good regarding depth of field and curvature of field, it was not good when it came to color rendition. The Nightforce generally muted colors and beat only the Burris in rendering the color blue, the primary color most scopes had the most trouble with, often rendering it as black instead. Also in the color department, the SHV had noticeable chromatic aberration, with green tending to bleed out above dark lines and violet below. Both it and the Leupold showed this noticeable CA that was not present in the Zeiss, USO, or Vortex. The magnitude was such that it would be noticed even if you were not looking for it. The SHV was less plagued by this CA than the Burris, however. The SHV handled resolution and contrast comparatively much better, finishing just behind the much more expensive Leupold on both and ahead of the Zeiss and Burris.

 

All in all, my feeling regarding the optics of the SHV was that they were very comfortable, a bit dull, and generally solid performance-wise. Judging performance vs. cost I found difficult, as the scopes it competed with most closely span such a huge range of cost. I think it is probably a more useful statement to say that if you pick up this scope you will find it very comfortable to be behind and you will not be dissatisfied with the optical performance:  it is solid but not exciting.

 

 

Mechanical Testing and Turret Discussion:

In making an SHV from the blueprint of an NXS, I expect most of the money was saved in the adjustments. It should come as no surprise then that they are pretty basic in their features and don't feel great. The design is of the kind popular a number of years ago with a single coin-slotted fastener holding the graduated knob in place. Oddly, the knob actually is indexed, and the oddity is that the index lines don't line up well with the indicator markings, making you wish they hadn't bothered to index it into place and had instead left you free to position it any way you wanted. The feel of the clicks is at the same time stiff and loose, with each click being stiff but with enough play between them to be able to move the knob a bit. The clicks are audible, though not particularly boisterous. The feel is otherwise best described as 'dry', I think. It's sort of the opposite of that full-of-grease feeling you get with some other scopes.

 

When the adjustments were tested for deviance from stated magnitude, they were found to be uniformly 2% larger in magnitude than spec. For instance, 80 clicks (20 MOA) measured 20.4 MOA on the target. This was true for elevation as well as windage. Interestingly, the reticle also measured 2% larger and even the adjustment range came out to 102.05 MOA instead of the 100 MOA spec. The uniformity of this deviation leads me to speculate that whatever deviation from spec is responsible affected all three measurements in precisely the same way.
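For completeness, the percentage figures fall straight out of the two concrete measurements quoted above; this is just the arithmetic, not additional data:

```python
# Percent over spec for the two concrete SHV measurements given in the text.
checks = {
    "elevation: 80 clicks (20 MOA dialed)": (20.0, 20.4),
    "total adjustment range (100 MOA spec)": (100.0, 102.05),
}
for label, (spec, measured) in checks.items():
    print(f"{label}: {(measured - spec) / spec * 100:+.2f}%")
# The reticle also measured ~2% large, which is why a single underlying cause seems likely.
```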

 

Though the adjustment magnitude was off by a bit more than average for the scopes I have tested, average being more like 1%, it was not all bad news mechanically. Despite being a 2nd focal plane scope, the Nightforce showed no shift in point of aim when the power ring was adjusted. This type of shift is more common than not in 2nd focal plane designs, though Nightforce specifically prides itself on eliminating it:  something it proved to be quite adept at in my testing. This is important as a several MOA shift has not proven uncommon from other makers in past testing. I should also note that though the adjustment magnitude tested a bit large, the adjustments returned to zero without any problems and did not display any other issues in my testing.

 

 

Nightforce Adjustments with some disassembly


 

 

Summary and Conclusion:

Of all the scopes I have ever tested this is probably the one that offered up the fewest surprises. That is neither an indictment nor an endorsement. It is perhaps an observation on the nature of optics in my experience. Most have just not been exactly what I expected, even with a good deal of experience. The view is usually better or worse than you expect and often you get a nasty surprise such as improperly sized reticles, poor adjustment magnitude, or a heavily canted reticle. I got exactly what I expected from this Nightforce and was left with very much the same impression I had of it upon first picking it up at SHOT Show. That opinion is of a solidly constructed optic that is very comfortable to get behind, but not particularly exceptional in any other way. The glass is good but not exceptional and the size and weight are more than that of a typical hunting rig.

 

I think the decision really comes down to what you're looking for. The SHV is exactly what you should expect it to be:  an NXS with minimal adjustments at a very aggressive price. I expect that is what a lot of people want. What it is not is a ground-up designed hunting scope. It is bigger and heavier than that. It is also not just an NXS without the price. Its adjustments are a long way from that. It is an old saw in the optics industry that you get what you pay for and the SHV is perhaps the best exemplar of that maxim to date. It will be your bombproof, reliable hunting scope at a fair price – if you're willing to carry it.

 

Here is Your Pro and Con Breakdown:
Pros:
-Exceptionally comfortable eyebox
-Aggressive pricing
-MOAR reticle and matching adjustments
-Illumination offered
-Solid but not exceptional optical and mechanical performance
-Nightforce reputation

Cons:
-Heavy and large for a hunting scope
-Adjustment feel and accuracy are merely acceptable
-Few configuration options available

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant:

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 or so mil of deviation is allowed from center in the erector as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27x56

 

 

The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18x44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose as the target was designed to be fired at rather than being used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise, test platform. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test. It's quite a lot of fun if you are a bit of a nerd like I am. After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note, every 5 mils of adjustment dialed, of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during this test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.

 

Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don't even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them and I very much doubt they would be willing to tell you what this number is, or better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear. This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test scopes purchased that have these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

 

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope's elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow-up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mil at that distance. That is a lot of drop and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.

 

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can't seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly as everything is fringed with odd colors. Often these personality quirks mean more to the user's experience than any particular magnitude of resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for that.

 

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks, of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing each scope is then set to the same power (the highest power shared by all as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic's performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44

 

 

Sep 10, 2015
 
BigJimFish logo

Review of the Burris XTR II 4-20x50mm

Les (Jim) Fischer
BigJimFish
July 3, 2015

 

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion:
– Summary and Conclusion
– Testing methodology: Adjustments, reticle size, reticle cant
– Testing methodology: Comparative optical evaluation

 

Background:

The Burris XTR II was a bit of a surprise for me at the 2014 Shot Show. I encountered the 4-20x model on the 1000-yard range and it intrigued me. The reason for this is that there are always lots of folks asking me for a long range optic in the $1k price range and there really aren’t many options. The only ones that spring to mind now are this Burris, the Vortex Viper PST line, and, for a little more, the SWFA SS 5-20 or various Bushnell Elite Tactical models. This is not a huge pool to choose from and I suspect it would be substantially smaller if the manufacturer was known as well as the brand, as I believe many of these brands use the same manufacturers. I shot the XTR II a little at the show and, so far as a person could tell in such a short exposure, it seemed to work quite nicely. I judged it well worth an in-depth review given the importance of the market segment and paucity of entrants.

 

The only thing I was initially hesitant about was that the XTR II's manufacture is subcontracted. While this is the norm in the industry, it is a departure for Burris, who usually makes their own stuff. At the time I ordered the XTR II, I believed that it was a Japanese production. Many Japanese subcontractors are quite good in terms of both reliability and performance, so I did not hold this too much against Burris. I am not sure if I mistakenly assumed the Japanese origin or if I was misinformed, but the XTR II 4-20×50 is made in the Philippines, whose factories have neither of these reputations. That was an unfortunate thing to discover upon unboxing, but you never know:  I have been surprised lately at the quality of many Chinese products, so perhaps I was in for another pleasant surprise. Quality manufacturing facilities can be built anywhere and corporations have certainly reached the point of being super-national entities, with countries serving more as a potential set of liabilities and costs for a corporation than a suite of assets.

 

Unboxing and Physical Description:

Inside the black, gray, yellow, and orange Burris box you will find, in addition to the scope, a user's guide, battery, wrench for changing the zero stop, non-honeycomb sunshade, and some house knockoff Butler Creek flip caps. For a mid-priced optic, it is a pretty nice suite of extras, saving you the crazy amount of money that buying caps costs when you have to do it piecemeal and offering the unexpected bonus of a sunshade. The manual starts with a page of advertising, which must presume that you are in the market for quite a lot of new scopes, as you have clearly just purchased this one and continuing to advertise that same scope to you would be preaching to the choir. After this unexplained page of propaganda, the guide goes on to contain some generally useful information about scope operation. It's a pretty good manual overall and doesn't actually spend any time explaining to me that I am not to shoot myself or others or to use my scope-laden rifle to spy on my neighbor sunbathing. However will I learn these things?!

 

Burris XTR II 4-20x50mm unboxing


 

The optic itself is styled most like a scope from the Nightforce NXS line, for which it could be mistaken at a distance. The power ring and parallax feel about right, the diopter turns a bit too freely, and the turrets themselves are quite stiff. This stiffness, coupled with the patterning of the knobs, makes it so that you are well advised to get a full wrap on the things, else it will feel like you're trying to tighten a saw blade by holding the teeth. These knobs are 8 mil per turn with a zero stop on the elevation and also a stop on the windage which limits the knob to 1 turn, 8 mils, each way (the 2015 update of this model has 10 mils per turn). I will also mention here the illumination control. It is a bit unusual in that the battery cap is also the entire knurled portion of the illumination control. The effect of this is that you can only loosen the cap at one end of the adjustment range and tighten it at the other. I also learned that these same extremes are the only true off positions for the illumination system. The off positions between each illumination setting are merely soft offs at which there is still some battery drain. This is apparently part of the digital illumination system that the scope has. While the operation appears analog to the user, internally it is not. This appears to be the downside of that system, whereas the upside is an auto-off feature that kicks in after prolonged use to save batteries.

 

Reticle:

The test example I have of the XTR II 4-20×50 has a reticle called the G2B Mil-Dot which, as the name suggests, is the same Gen 2 Mil-Dot you have seen in many other makers' scopes. At the time of my ordering the sample, I don't believe there was another mil reticle option. There is another option now and it is called the SCR Mil.Mil, which is a ladder style mil reticle without a Christmas tree feature but with finer graduations that appear to be .1 mil. The SCR Mil also appears to have finer line widths. It is probably the option I would go for, as I generally like fine line widths and tight graduations. An MOA version of the SCR reticle also exists that is paired with MOA knobs for those who prefer the imperialist way. Really, despite the paucity of options, Burris has covered most users with these. I would say well done – they must be paying some attention to the marketplace.

 

In testing, I found the reticle to be right on size-wise though slightly canted clockwise. I estimate the cant at .83 degrees. At this magnitude, that cant will cause a shot to go wide by .0145 mils for every 1 mil of drop. In the case of a 168gr .308 at 1000 yards with the correspondingly high 12.1 mils of drop, this only adds up to .174 mils or about 17 cm. While this is certainly measurable, it strikes me as a reasonable amount of deviation to have in a scope at this price point. Relatively speaking, the Burris reticle had a little more cant than any other reticle tested, but was one of only a few scopes to have the reticle sized close enough to true to have no measurable deviation using my testing equipment. It certainly came out on the sunny side of average.
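Here is a quick sketch reproducing that cant arithmetic; the tiny difference from the .174 mil figure in the text is just rounding.

```python
import math

# Lateral error from reticle cant: the drop you hold swings sideways by sin(cant),
# so the per-mil-of-drop factor and the total at a given drop follow directly.
cant_deg = 0.83
drop_mils = 12.1                                  # 168gr .308 example from the text

per_mil = math.sin(math.radians(cant_deg))        # ~0.0145 mil wide per mil of drop
total_mils = per_mil * drop_mils                  # ~0.175 mil at that full drop
print(f"{per_mil:.4f} mil per mil of drop, {total_mils:.3f} mil total")
```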

 

Comparative Optical Evaluation:

At the time I tested this optic, the optics that I had on hand, and therefore was able to compare it to were:  the Vortex Razor HDII 5-25×56, USO LR-17 3.2-17×44, Leupold MK6 3-18×44, Nightforce SHV 4-14×56, and an older Zeiss Conquest 4.5-14×44. This suite of test optics varied widely in price and included both scopes aimed at the tactical market and those designed to appeal to hunters. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.

 

The comparison lineup from left to right- Vortex Razor HDII 5-25x56, Nightforce SHV 4-14x56, Burris XTR II 4-20x50mm, USO LR-17 3.2-17x44, Leupold MK6 3-18x44 not pictured* Zeiss Conquest 4.5-14x44.


 

Pretty early on in the optical evaluation it became apparent that the scopes were sorting themselves into three groups. The USO and Vortex were clearly optically superior to the others. They had bigger fields of view, higher resolution, better contrast, and lower chromatic aberration. They were also very close to each other in performance. After a bit of a gap in performance, the next group was also very close to each other and included the Leupold, SHV, and Zeiss. The Burris brought up the rear:  not really comparing closely with anything else in the analysis despite its price being very close to that of the SHV and almost double that of the Zeiss. Because of these clear tiers and price differences, I spent most of my time comparing the Burris to the SHV and Zeiss. Comparisons were done at a variety of magnifications, but because it was the highest magnification common to all the optics, 14x was used most extensively. It should be noted that, unlike its two closest comparisons in price, the Burris is a first focal plane scope. This is a feature it has in common with the much more pricey scopes in the lineup. Since FFP scopes are more difficult to manufacture with as high an optical performance as a comparable SFP scope but are more desirable to tactical shooters, some allowance must be made for the Burris on this account.

 

The first notable aspect of the Burris, optically, is the eyebox. The Burris had substantially the smallest eyebox of any scope in the lineup. This small eyebox, combined with substantial curvature of field, meant that no single head position was sufficient to observe the entire field of view in focus at the same time. This is a problem I have noted with a few other scopes in the past, though it is by no means a common issue. As the user moves his head around in the eyebox, he will note different parts of the image coming into and losing focus. It should also be noted that the Burris is on the small side for field of view, being greater than only the SHV in this set of comparisons. FOV is an important consideration when judging curvature of field, since a larger FOV makes limiting curvature more difficult but is well worth the trade. This eyebox / curvature of field issue will be noted by the user even in the absence of comparison scopes. This is not the case for many other optical properties, such as resolution or contrast. It renders use of the scope an uncomfortable and straining experience that tires the user.

 

A second optical issue that will be noted on the Burris even in the absence of comparison optics is the chromatic aberration. Dark areas in an image are noticeably tinged yellow on the right and violet on the left. The Burris had more dramatic CA than any other optic in the test group. The magnitude was dramatic enough to be noticeable at 4x. This is atypical for CA, which is usually only noticeable at high magnifications.

 

When the comparison optics were added to the testing, it became apparent that the Burris had the lowest resolution and contrast of the group. Neither of these was aided by the generally yellow and hazy appearance of the image through the Burris relative to the other optics.

 

The bottom line for the XTR II is that even with some allowance for being an FFP optic compared most closely to SFP optics, I did not find it as good as it should be. I could probably forgive the general yellowness or haziness, but that wonky eyebox is hard to get behind. It is true that I have seen this problem before, in scopes that cost significantly more than this one, but it wasn't acceptable in those either. If the scope had a giant FOV and the problem were limited to the bonus area, that would be okay, but that is not the case here. It is just not good optical design. That eyebox, coupled with the dramatic chromatic aberration, made for a pretty unpleasant experience. The Burris XTR II 4-20x should simply be better than it is optically.

 

Mechanical Testing and Turret Discussion:

Up until the mechanical testing, the Burris was not faring particularly well. As the knobs started to break in and the results started to come in, however, things began to change. While still rather jagged, the more the knobs were turned (and the lubricant thereby spread), the better the experience of using them was. They have a good audible click, though the feel of the clicks is rather lacking.

 

In the elevation test I found the Burris to have 14.7 mils (52.92 MOA) of elevation from optical center to stop. This was actually a bit more than spec, so perhaps the spec is a little conservative and perhaps my center was a bit low, as I only center the scopes within +/- a few MOA, movement in the adjustable V-block making more accurate centering impracticable. For all 14.7 mils, the scope tracked perfectly and no deviation was notable using my equipment, despite the fact that this setup can easily distinguish less than .5% deviation over 10 mils. The Burris was actually the first scope in my test lineup to have its adjustment accuracy measured, so I thought I might be in for a really boring time after this result. That was not the case. At the time of this writing, the Burris is the only scope to test at less than 1% deviation (though the Zeiss was not tested and I have a little more testing to go with repaired scopes and second examples of scopes). The Burris was also clean for 4 mils in each direction on the windage tracking and always returned to zero. In addition, there was no reticle movement with power change. This is not surprising on an FFP scope, though shift is common in SFP optics. Overall, the Burris tracked perfectly and was the only scope to do so.

 

Burris XTR II 4-20x50mm adjustments, parallax, and illumination controls

 

Summary and Conclusion:

The most important part of a scope, from the standpoint of a distance shooter, is the mechanical accuracy. Many a missed shot that has been attributed to the shooter, the wind, the rifle, or the ammo was, in point of fact, the result of a scope that tracked poorly. It is a wonder to me that so few shooters actually test and verify their equipment. Relatedly, I believe that many of the poor opinions people hold regarding the accuracy of ballistic programs actually stem from scope adjustment problems. Burris did the best of any scope tested on adjustment accuracy, so good on them.

 

Despite the mechanical perfection of my test Burris, I still have reservations about the scope in general. Perfection of the adjustments on this example does not mean every example will be that way. Finding, say, a 5% deviation would be damning, as it would mean a scope that far out of whack can escape QC, but one perfect scope is only a data point in the right direction. All makers allow some measurable amount of deviation that they deem acceptable. This example is a good sign for the QC and standards of Burris, but it by no means guarantees you the same good fortune on purchase.

 

On the flip side, the optics of this scope were not good. The eyebox, chromatic aberration, resolution, and contrast were all lackluster, both in general and for the cost. While I do not expect every example to be mechanically flawless (though the standards may be such that they are all at least mechanically good), I do expect every example to be similarly lacking in optical performance.

 

I feel torn three ways. The features, such as the power range, 8 mil zero stop turrets, side focus, illumination, and reticle choices, add up to a middle of the road score. The optics were bad, but the mechanics were excellent. I therefore can't say it is a poor choice or an excellent one:  it is a compromise. I guess that is really what should be expected at the $1.1k price point.

 

Here is Your Pro and Con Breakdown:

Pros:
-The test scope tracked perfectly and was the only scope to do so
-Reticle is sized correctly and good reticle choices exist
-Affordable price point
-Zero stop
-Illumination
-Side focus
-Burris has a good customer service reputation

Cons:
-Optics were poor in terms of eyebox, resolution, contrast, chromatic aberration, field of view, and color
-Turrets are 8 mils per turn instead of 10 (the 2015 model is 10 mils per turn) and don't have great feel

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant:

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 mil of deviation from center is allowed in the erector, as the play in the adjustable V-block makes it difficult to do better than this. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27x56

 

 

The three fine-threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds downrange, as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18x44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose, as the target was designed to be fired at rather than used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary. With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments. The rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. Each click of movement in the scope adjustments moves the reticle on the target, and this can be observed by the tester as it actually happens during the test. It's quite a lot of fun if you are a bit of a nerd like I am.

After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and of any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on the same target by comparing the reticle markings to those on the target. Lastly, the test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during that test, and a little math to calculate the angle described by that amount of horizontal deviation over that amount of vertical travel.
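
That "little math" is a simple right-triangle relationship. Here is a hedged sketch of it in Python with hypothetical numbers (these are not measurements from any scope in this review):

import math

def reticle_cant_degrees(horizontal_drift_mils, vertical_travel_mils):
    # Cant angle implied by the reticle drifting sideways while the elevation
    # adjustment is run through a known amount of vertical travel.
    return math.degrees(math.atan2(horizontal_drift_mils, vertical_travel_mils))

# 0.01 mil of sideways drift over 10 mils of elevation is roughly 0.06 degrees of cant
print(round(reticle_cant_degrees(0.01, 10.0), 3))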

 

Testing a single scope of a given model from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don't even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from it being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes.

Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them, and I very much doubt they would be willing to tell you what this number is or, better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer the average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average assembly time was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear. This is a particular concern for scopes that have high magnification ratios and also those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test scopes purchased with these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

 

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope's elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with the adjustments, so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow-up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. The mean reticle cant is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21cm at 1000 meters with a 168gr .308 load that drops 12.1 mil at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.
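
To make the 1 degree example above concrete, here is the same back-of-the-envelope arithmetic as a short Python sketch. It assumes the simple approximation that the cant-induced error is roughly sin(cant angle) times the linear drop, which is plenty close at these small angles:

import math

def cant_error_cm(cant_degrees, drop_mils, range_m):
    # Approximate horizontal error from reticle cant: sin(cant) times the linear
    # drop, using the fact that 1 mil subtends 10 cm per 100 m of range.
    drop_cm = drop_mils * (range_m / 100.0) * 10.0
    return math.sin(math.radians(cant_degrees)) * drop_cm

# 1 degree of cant with 12.1 mils of drop at 1000 m is about 21 cm of horizontal error
print(round(cant_error_cm(1.0, 12.1, 1000.0), 1))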

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine quality of the scope via images so obtained.  Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire, nor the resources, to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement will be reached on the relative weights of different factors in analysis.

 

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox, behind which the user notices he just can't seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly, as everything is fringed with odd colors. Often these personality quirks mean more to the user's experience than any particular resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike, and the reasons for that.

 

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks, of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all, as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through all of the optics, with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations from all of these sessions will be combined in the way that the tester believes best conveys his opinion of each optic's performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44