Sep 10, 2015
 

Review of the Burris XTR II 4-20x50mm

Les (Jim) Fischer
BigJimFish
July 3, 2015

 

Table of Contents:
– Background
– Unboxing and Physical Description
– Reticle
– Comparative Optical Evaluation
– Mechanical Testing and Turret Discussion
– Summary and Conclusion
– Testing methodology: Adjustments, reticle size, reticle cant
– Testing methodology: Comparative optical evaluation

 

Background:

The Burris XTR II was a bit of a surprise for me at the 2014 Shot Show. I encountered the 4-20x model on the 1000-yard range and it intrigued me. The reason for this is that there are always lots of folks asking me for a long range optic in the $1k price range and there really aren’t many options. The only ones that spring to mind now are this Burris, the Vortex Viper PST line, and, for a little more, the SWFA SS 5-20 or various Bushnell Elite Tactical models. This is not a huge pool to choose from and I suspect it would be substantially smaller if the manufacturer were known as well as the brand, as I believe many of these brands use the same manufacturers. I shot the XTR II a little at the show and, so far as a person could tell in such a short exposure, it seemed to work quite nicely. I judged it well worth an in-depth review given the importance of the market segment and the paucity of entrants.

 

The only thing I was initially hesitant about was that the XTR II’s manufacture is subcontracted. While this is the norm in the industry, it is a departure for Burris, who usually makes their own stuff. At the time I ordered the XTR II, I believed that it was a Japanese production. Many Japanese subcontractors are quite good in terms of both reliability and performance, so I did not hold this too much against Burris. I am not sure if I mistakenly assumed the Japanese origin or if I was misinformed, but the XTR II 4-20×50 is made in the Philippines, whose factories have neither of these reputations. That was an unfortunate thing to discover upon unboxing, but you never know:  I have been surprised lately at the quality of many Chinese products, so perhaps I was in for another pleasant surprise. Quality manufacturing facilities can be built anywhere and corporations have certainly reached the point of being super-national entities, with countries serving more as a potential set of liabilities and costs for a corporation than a suite of assets.

 

Unboxing and Physical Description:

Inside the black, gray, yellow, and orange Burris box you will find, in addition to the scope, a user’s guide, battery, wrench for changing the zero stop, non-honeycomb sunshade, and some house knockoff Butler Creek flip caps. For a mid-priced optic, it is a pretty nice suite of extras, saving you the crazy amount of money that buying caps costs when you have to do it piecemeal and offering the unexpected bonus of a sunshade. The manual starts with a page of advertising which must presume that you are in the market for quite a lot of new scopes, as you have clearly just purchased this one and continuing to advertise that same scope to you would be preaching to the choir. After this unexplained page of propaganda, the guide goes on to contain some generally useful information about scope operation. It’s a pretty good manual overall and doesn’t actually spend any time explaining to me that I am not to shoot myself or others or to use my scope-laden rifle to spy on my neighbor sunbathing. However will I learn these things?!

 

Burris XTR II 4-20x50mm unboxing


 

The optic itself is styled most like a scope from the Nightforce NXS line, for which it could be mistaken at a distance. The power ring and parallax feel about right, the diopter turns a bit too freely, and the turrets themselves are quite stiff. This stiffness, coupled with the patterning of the knobs, makes it so that you are well advised to get a full wrap on the things, else it will feel like you’re trying to tighten a saw blade by holding the teeth. These knobs are 8 mils per turn with a zero stop on the elevation and also a stop on the windage which limits the knob to 1 turn, 8 mils, each way (the 2015 update of this model has 10 mils per turn). I will also mention here the illumination control. It is a bit unusual in that the battery cap is also the entire knurled portion of the illumination control. The effect of this is that you can only loosen the cap at one end of the adjustment range and tighten it at the other. I also learned that these same extremes are the only true off positions for the illumination system. The off positions between each illumination setting are merely soft offs at which there is still some battery drain. This is apparently a consequence of the scope’s digital illumination system: while the operation appears analog to the user, internally it is not. The soft offs appear to be the downside of that design, whereas the upside is an auto-off feature that shuts the illumination down after prolonged use to save the battery.

 

Reticle:

The test example I have of the XTR II 4-20×50 has a reticle called the G2B Mil-Dot which, as the name suggests, is the same Gen 2 Mil-Dot you have seen in many other makers’ scopes. At the time of my ordering the sample, I don’t believe there was another mil reticle option. There is another option now, called the SCR Mil, which is a ladder style mil reticle without a Christmas tree feature but with finer graduations that appear to be .1 mil. The SCR Mil also appears to have finer line widths. It is probably the option I would go for, as I generally like fine line widths and tight graduations. An MOA version of the SCR reticle also exists that is paired with MOA knobs for those who prefer the imperialist way. Really, despite the paucity of options, Burris has covered most users with these.  I would say well done – they must be paying some attention to the marketplace.

 

In testing, I found the reticle to be right on size-wise though slightly canted clockwise. I estimate the cant at .83 degrees. At this magnitude, that cant will cause a shot to go wide by .0145 mils for every 1 mil of drop. In the case of a 168gr .308 at 1000 yards with the correspondingly high 12.1 mils of drop, this only adds up to .174 mils or about 17 cm. While this is certainly measurable, it strikes me as a reasonable amount of deviation to have in a scope at this price point. Relatively speaking, the Burris reticle had a little more cant than any other reticle tested, but was one of only a few scopes to have the reticle sized close enough to true to have no measurable deviation using my testing equipment. It certainly came out on the sunny side of average.
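The cant arithmetic above is simple trigonometry and can be sketched in a few lines. This is a minimal illustration of the calculation, not part of the review’s actual test tooling; the drop figure is the article’s 168gr .308 example:

```python
import math

def cant_error_mils(cant_deg, drop_mils):
    """Horizontal error, in mils, from dialing `drop_mils` of elevation
    on a reticle canted by `cant_deg` degrees."""
    return math.tan(math.radians(cant_deg)) * drop_mils

# The article's example: 0.83 degrees of cant, 12.1 mils of drop
err = cant_error_mils(0.83, 12.1)
print(round(err, 3))  # ~0.175 mils, in line with the figure in the text
```

At long range, 1 mil subtends roughly 1 m per 1000 m of distance, which is where the ~17 cm figure comes from.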

 

Comparative Optical Evaluation:

At the time I tested this optic, the optics that I had on hand, and therefore was able to compare it to, were:  the Vortex Razor HDII 5-25×56, USO LR-17 3.2-17×44, Leupold MK6 3-18×44, Nightforce SHV 4-14×56, and an older Zeiss Conquest 4.5-14×44. This suite of test optics varied widely in price and included both scopes aimed at the tactical market and those designed to appeal to hunters. To learn more about the exact methodology of the testing, please refer to the testing methodology section at the conclusion of the article.

 

The comparison lineup from left to right: Vortex Razor HDII 5-25×56, Nightforce SHV 4-14×56, Burris XTR II 4-20x50mm, USO LR-17 3.2-17×44, Leupold MK6 3-18×44. Not pictured: Zeiss Conquest 4.5-14×44.

 

Pretty early on in the optical evaluation it became apparent that the scopes were sorting themselves into three groups. The USO and Vortex were clearly optically superior to the others. They had bigger fields of view, higher resolution, better contrast, and lower chromatic aberration. They were also very close to each other in performance. After a bit of a gap in performance, the next group was also very close to each other and included the Leupold, SHV, and Zeiss. The Burris brought up the rear:  not really comparing closely with anything else in the analysis despite its price being very close to that of the SHV and almost double that of the Zeiss. Because of these clear tiers and price differences, I spent most of my time comparing the Burris to the SHV and Zeiss. Comparisons were done at a variety of magnifications, but because it was the highest magnification in common to all optics, 14x was used most extensively. It should be noted that unlike its two closest comparisons in price the Burris is a first focal plane scope. This is a feature it has in common with the much more pricey scopes in the lineup. Since FFP scopes are more difficult to manufacture with as high an optical performance as a comparable SFP scope but are more desirable to tactical shooters, some allowance must be made for the Burris on this account.

 

The first notable aspect of the Burris, optically, is the eyebox. The Burris had by a substantial margin the smallest eyebox of any scope in the lineup. This small eyebox, combined with substantial curvature of field, rendered no one head position sufficient to observe the entire field of view in focus at the same time. This is a problem I have noted with a few other scopes in the past, though it is by no means a common issue. As the user moves his head around in the eyebox, he will note different parts of the image coming into and losing focus. It should also be noted that the Burris is on the small side for field of view, being greater than only the SHV in this set of comparisons. FOV is an important consideration for curvature of field, since a larger FOV makes limiting curvature more difficult but is well worth the trade. This eyebox / curvature of field issue will be noted by the user even in the absence of comparison scopes. This is not the case for many other optical properties, such as resolution or contrast. It renders use of the scope an uncomfortable and straining experience that tires the user.

 

A second optical issue that will be noted on the Burris even in the absence of comparison optics is the chromatic aberration. Dark areas in an image are noticeably tinged yellow on the right and violet on the left. The Burris had more dramatic CA than any other optic in the test group. The magnitude was dramatic enough to be noticeable at 4x. This is atypical for CA, which is usually only noticeable at high magnifications.

 

When the comparison optics were added to the testing, it became apparent that the Burris had the lowest resolution and contrast of the group. Neither of these was aided by the generally yellow and hazy appearance of the image through the Burris relative to the other optics.

 

The bottom line for the XTR II is that even with some allowance for being an FFP optic compared most closely to SFP optics, I did not find it as good as it should be. I could probably forgive the general yellowness or haziness, but that wonky eyebox is hard to get behind. It is true that I have seen this problem before, in scopes that cost significantly more than this one, but it wasn’t acceptable in those either. If the scope had a giant FOV and the problem was limited to the bonus area that would be okay, but that is not the case here. It is just not good optical design. That eyebox coupled with the dramatic chromatic aberration made for a pretty unpleasant experience. The Burris XTR II 4-20x should simply be better than it is optically.

 

Mechanical Testing and Turret Discussion:

Up until the mechanical testing, the Burris was not faring particularly well. As the knobs started to break in and the results started to come in, however, things began to change. The knobs, while still rather jagged, got better to use the more they were turned (and the lubricant thereby spread). They have a good audible click, though the feel of the clicks is rather lacking.

 

In the elevation test I found the Burris to have 14.7 mils (roughly 50.5 MOA) of elevation from optical center to stop. This was actually a bit more than spec, so perhaps the spec is a little conservative or perhaps my center was a bit low, as I only center the scopes to within +/- a few MOA, movement in the adjustable V-block making more accurate centering impracticable. For all 14.7 mils, the scope tracked perfectly and no deviation was notable using my equipment, despite the fact that this setup can easily distinguish less than .5% deviation over 10 mils. The Burris was actually the first scope in my test lineup to have its adjustment accuracy measured, so I thought I might be in for a really boring time after this result. That was not the case. At the time of this writing, the Burris is the only scope to test at less than 1% deviation (though the Zeiss was not tested and I have a little more testing to go with repaired scopes and second examples of scopes). The Burris was also clean through 4 mils in each direction on the windage tracking and always returned to zero. In addition, there was no reticle movement with power change. This is not surprising on an FFP scope, though such shift is common in SFP optics. Overall, the Burris tracked perfectly and was the only scope to do so.

 

Burris XTR II 4-20x50mm adjustments, parallax, and illumination controls


 

Summary and Conclusion:

The most important part of a scope from the standpoint of a distance shooter is the mechanical accuracy. Many, many a missed shot that has been attributed to the shooter, the wind, the rifle, or the ammo was, in point of fact, the result of a scope that tracked poorly. It is a wonder to me that so few shooters actually test and verify their equipment. Related to this, it is also a belief of mine that the poor opinions many have regarding the accuracy of ballistic programs result instead from scope adjustment problems. Burris did the best of any scope tested on adjustment accuracy, so good on them.

 

Despite the mechanical perfection of my test Burris, I still have many reservations about the scope in general. Perfection of the adjustments on this example does not mean they will all be that way. While finding, say, a 5% deviation would have been damning, as it would mean a badly out-of-spec scope can escape QC, one perfect scope is only a data point in the right direction. All makers will have some amount of measurable deviation they deem acceptable. This example is a good sign for the QC and standards of Burris, but by no means guarantees you the same good fortune on purchase.

 

On the flip side, the optics of this scope were not good. The eyebox, chromatic aberration, resolution, and contrast were all lackluster both in general and for the cost. While I do not expect all examples to be mechanically flawless (though the standards may be high enough that they are all at least mechanically good), I do expect all examples to be similarly lacking in optical performance.

 

I feel torn three ways. I think the features such as power range, 8 mil ZS turrets, side focus, illumination, and reticles add up to a middle of the road score. The optics were bad, but the mechanics excellent. I can’t therefore say it’s a poor choice or an excellent one:  it’s a compromise. I guess that is really what should be expected at the $1.1k price point.

 

Here is Your Pro and Con Breakdown:

Pros:
-The test scope tracked perfectly and was the only scope to do so
-Reticle also sized correctly and good reticle choices exist
-Affordable price point
-Zero stop
-Illumination
-Side focus
-Burris has a good customer service reputation

Cons:
-Optics were poor in terms of eyebox, resolution, contrast, chromatic aberration, field of view, and color
-Turrets are 8 instead of 10 mil (2015 model is 10 mils per turn) and don’t have great feel

 

Testing Methodology:  Adjustments, Reticle Size, Reticle Cant:

When testing scope adjustments, I use the adjustable V-block on the right of the test rig to first center the erector. About .2 or so mil of deviation is allowed from center in the erector as it is difficult to do better than this because the adjustable V-block has some play in it. I next set the zero stop (on scopes with such a feature) to this centered erector and attach the optic to the rail on the left side of the rig.

 

Test rig in use testing the adjustments of the Vortex Razor HD II 4.5-27x56


 

 

The three fine threaded 7/16″ bolts on the rig allow the scope to be aimed precisely at a Horus CATS 280F target 100 yds down range as measured by a quality fiberglass tape measure. The reticle is aimed such that its centerline is perfectly aligned with the centerline of the target and it is vertically centered on the 0 mil elevation line.

 

Horus CATS 280F target inverted and viewed through the Leupold Mark 6 3-18x44

 

The CATS target is graduated in both mils and true MOA and calibrated for 100 yards. The target is mounted upside down on a target backer designed specifically for this purpose, as the target was designed to be fired at rather than used in conjunction with a stationary scope. Since up for bullet impact means down for reticle movement on the target, the inversion is necessary.

With the three bolts tightened on the test rig head, the deflection of the rig is about .1 mil under the force required to move adjustments, and the rig immediately returns to zero when the force is removed. It is a very solid, very precise test platform. Each click of movement in the scope adjustments moves the reticle on the target and this can be observed by the tester as it actually happens during the test. It’s quite a lot of fun if you are a bit of a nerd like I am.

After properly setting the parallax and diopter, I move the elevation adjustment through the range from erector center until it stops, making note every 5 mils of any deviation in the position of the reticle on the target relative to where it should be, and also making note of the total travel and any excess travel in the elevation knob after the reticle stops moving but before the knob stops. I then reverse the process and go back down to zero. This is done several times to verify consistency, with notes taken of any changes. After testing the elevation adjustments in this way, the windage adjustments are tested out to 4 mils each way in similar fashion using the same target and basically the same method. After concluding the testing of adjustments, I also test the reticle size calibration. This is done quite easily on this same target by comparing the reticle markings to those on the target. Lastly, this test target has a reticle cant testing function (basically a giant protractor) that I utilize to test reticle cant. This involves the elevation test as described above, a note of how far the reticle deviates horizontally from center during that test, and a little math to calculate the angle described by that amount of horizontal deviation over that degree of vertical travel.
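The two calculations in this procedure, percent tracking deviation and implied cant angle, can be sketched as follows. The numbers in the example are hypothetical; only the method is from the text:

```python
import math

def tracking_error_pct(dialed_mils, observed_mils):
    """Percent deviation between what was dialed and where the
    reticle actually landed on the target."""
    return 100.0 * (observed_mils - dialed_mils) / dialed_mils

def cant_angle_deg(horizontal_dev_mils, vertical_travel_mils):
    """Cant angle implied by the horizontal drift of the reticle
    observed over a span of vertical travel."""
    return math.degrees(math.atan2(horizontal_dev_mils, vertical_travel_mils))

# Hypothetical readings: 9.95 mils observed where 10 were dialed,
# and 0.21 mils of horizontal drift over 14.7 mils of elevation travel
print(round(tracking_error_pct(10.0, 9.95), 2))  # -0.5 (percent)
print(round(cant_angle_deg(0.21, 14.7), 2))      # ~0.82 degrees
```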

 

Testing a single scope of a given model, from a given manufacturer, which is really all that is feasible, is not meant to be indicative of all scopes from that maker. Accuracy of adjustments, reticle size, and cant will differ from scope to scope. After testing a number of scopes, I have a few theories as to why. As designed on paper, I doubt that any decent scope has flaws resulting in inaccurate clicks in the center of the adjustment range. Similarly, I expect few scopes are designed with inaccurate reticle sizes (and I don’t even know how you would go about designing a canted reticle, as the reticle is etched on a round piece of glass and cant simply results from its being rotated incorrectly when positioned). However, ideal designs aside, during scope assembly the lenses are positioned by hand and will be off by this much or that much. This deviation in lens position from design spec can cause the reticle size or adjustment magnitude to be incorrect and, I believe, is the reason for these problems in most scopes. Every scope maker is going to have a maximum amount of deviation from spec that is acceptable to them, and I very much doubt they would be willing to tell you what this number is, or, better yet, what the standard deviation is. The tighter the tolerance, the better from the standpoint of the buyer, but also the longer the average time it will take to assemble a scope and, therefore, the higher the cost. Assembly time is a major cost in scope manufacture. It is actually the reason that those S&B 1-8x short dots I lusted over never made it to market. I can tell you from seeing the prototype that they were a good design, but they were also a ridiculously tight tolerance design. In the end, the average time of assembly was such that it did not make sense to bring them to market, as they would cost more than it was believed the market would bear.
This is a particular concern for scopes that have high magnification ratios and also for those that are short in length. Both of these design attributes tend to make assembly very touchy in the tolerance department. This should make you, the buyer, particularly careful to test scopes purchased with these desirable attributes, as manufacturers will face greater pressure on this type of scope to allow looser standards. If you test yours and find it lacking, I expect that you will not have too much difficulty in convincing a maker with a reputation for good customer service to remedy it:  squeaky wheel gets the oil and all that.

 

Before I leave adjustments, reticle size, and reticle cant, I will give you some general trends I have noticed so far. The average adjustment deviation seems to vary on many models with distance from optical center. This is a good endorsement for a 20 MOA base, as it will keep you closer to center. The average deviation for a scope’s elevation seems to be about .1% at 10 mils. Reticle size deviation is sometimes found to vary with adjustments so that both the reticle and adjustments are off in the same way and with similar magnitude. This makes them agree with each other when it comes to follow-up shots. I expect this is caused by the error in lens position affecting both the same. In scopes that have had a reticle with error, it has been of this variety, but fewer scopes have this issue than have adjustments that are off. Reticle size deviation does not appear to vary as you move from erector center. The mean amount of reticle error is about .05%. Reticle cant mean is about .05 degrees. Reticle cant, it should be noted, affects the shooter as a function of calculated drop and can easily get lost in the windage read. As an example, a 1 degree cant equates to about 21 cm at 1000 meters with a 168gr .308 load that drops 12.1 mils at that distance. That is a lot of drop, and a windage misread of 1 mph is of substantially greater magnitude (more than 34 cm) than our example reticle cant-induced error. This type of calculation should be kept in mind when examining all mechanical and optical deviations in a given scope:  a deviation is really only important if it is of a magnitude similar to the deviations expected to be introduced by the shooter, conditions, rifle, and ammunition.
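That comparison can be checked numerically. This is a quick sketch; the ~34 cm figure for a 1 mph wind misread is taken from the article’s ballistic example rather than computed here:

```python
import math

MIL_TO_CM_AT_1000M = 100.0  # 1 mil subtends ~1 m = 100 cm at 1000 m

def cant_error_cm(cant_deg, drop_mils, mil_to_cm=MIL_TO_CM_AT_1000M):
    """Horizontal miss, in cm, from a canted reticle at a given dialed drop."""
    return math.tan(math.radians(cant_deg)) * drop_mils * mil_to_cm

cant_miss = cant_error_cm(1.0, 12.1)  # ~21 cm, as in the text
wind_misread_cm = 34.0                # article's figure for a 1 mph misread
print(cant_miss < wind_misread_cm)    # the wind-reading error dominates
```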

Testing Methodology:  Comparative Optical Evaluation

The goal of my optical performance evaluation is NOT to attempt to establish some sort of objective ranking system. There are a number of reasons for this. Firstly, it is notoriously difficult to measure optics in an objective and quantifiable way. Tools, such as MTF plots, have been devised for that purpose, primarily by the photography business. Use of such tools for measuring rifle scopes is complicated by the fact that scopes do not have any image recording function and therefore a camera must be used in conjunction with the scope. Those who have taken through-the-scope pictures will understand the image to image variance in quality and the ridiculousness of attempting to determine the quality of the scope via images so obtained. Beyond the difficulty of applying objective and quantifiable tools from the photography industry to rifle scopes, additional difficulties are encountered in the duplication of repeatable and meaningful test conditions. Rifle scopes are designed to be used primarily outside, in natural lighting, and over substantial distances. Natural lighting conditions are not amenable to repeat performances. This is especially true if you live in central Ohio, as I do. Without repeatable conditions, analysis tools have no value, as the conditions are a primary factor in the performance of the optic. Lastly, the analysis of any data gathered, even if such meaningful data were gathered, would not be without additional difficulties. It is not immediately obvious which aspects of optical performance, such as resolution, color rendition, contrast, curvature of field, distortion, and chromatic aberration, should be considered of greater or lesser importance. For such analysis to have great value, not only would a ranking of optical aspects be in order, but a compelling and decisive formula would have to be devised to quantitatively weigh the relative merits of the different aspects. Suffice it to say, I have neither the desire nor the resources to embark on such a multi-million dollar project and, further, I expect it would be a failure anyway as, in the end, no agreement would ever be reached on the relative weights of the different factors in the analysis.

 

The goal of my optical performance evaluation is instead to help the reader get a sense of the personality of a particular optic. Much of the testing documents the particular impressions each optic makes on the tester. An example of this might be a scope with a particularly poor eyebox behind which the user notices he just can’t seem to get to a point where the whole image is clear. Likewise, a scope might jump out to the tester as having a very bad chromatic aberration problem that makes it difficult to see things clearly, as everything is fringed with odd colors. Often these personality quirks mean more to the user’s experience than any particular resolution number would. My testing seeks to document the experience of using a particular scope in such a way that the reader will form an impression similar to that of the tester with regard to like or dislike and the reasons for it.

 

The central technique utilized for this testing is comparative observation. One of the test heads designed for my testing apparatus consists of five V-blocks, of which four are adjustable. This allows each of the four scopes on the adjustable blocks to be aimed such that they are collinear with the fifth. For the majority of the testing, each scope is then set to the same power (the highest power shared by all, as a rule). Though power numbers are by no means accurately marked, an approximation will be obtained. Each scope will have the diopter individually adjusted by the tester. A variety of targets, including both natural backdrops and optical test targets, will be observed through the plurality of optics, with the parallax being adjusted for each optic at each target. A variety of lighting conditions over a variety of days will be utilized. The observations through all of these sessions will be combined in the way that the tester best believes conveys his opinion of the optic’s performance and explains the reasons why.

 

A variety of optical test targets viewed through the Leupold Mark 6 3-18x44
