
    There’s Yet Another Rant About Apple and Mac Users

    June 11th, 2018

    Over the years, some tech pundits have decided that Apple really needs to drop the Mac. To them, it has outlived its usefulness and, besides, far more money is made from selling iPhones.

    But it’s a good source of hit bait to claim that “Mac users don’t really matter to Apple.”

    Indeed, Apple has, at times, made it seem as if that claim were accurate. The Mac mini has not been refreshed since 2014. After releasing a total redesign of the Mac Pro in late 2013, Apple appeared to drop the ball and mostly abandoned that model.

    When a new MacBook Pro launched in late 2016, some considered its billing as a professional notebook a huge exaggeration. It was thinner, in the spirit of recent Apple gear, but the highly touted Touch Bar, powered by an ARM system-on-a-chip, was dismissed as fluff and not much else.

    Apple also got dinged for what it hadn’t done, such as offering a model with 32GB of RAM. But that would have required a different memory controller, one that might have impacted performance and battery life. For comparison, most PC notebooks were also limited to 16GB. A future Intel CPU update will offer an integrated memory controller that doubles memory capacity.

    Just after Christmas, a Consumer Reports review failed to recommend the 2016 MacBook Pro supposedly due to inconsistent battery life. After Apple got involved, it turned out that CR’s peculiar testing scheme, which involves disabling the browser cache, triggered a rare bug. After Apple fixed it, a retest earned the MacBook Pro an unqualified recommendation.

    Was all this proof that Apple just didn’t care about Macs?

    Well, it’s a sure thing the Touch Bar wasn’t cheap to develop, and embedding an ARM chip in a Mac is definitely innovative. But Apple’s priorities appeared to have gone askew, as the company admitted during a small press roundtable in early 2017.

    The executive team made apologies for taking the Mac Pro in the wrong direction, and promised that a new model with modular capabilities was under development, but it wouldn’t ship right away. There would, however, be a new version of the iMac with professional capabilities. VP Philip Schiller spoke briefly about loving the Mac mini, but quickly changed the subject.

    Before the 2017 WWDC, I thought that Apple would merely offer more professional parts for customized 27-inch 5K iMacs. But such components as Intel Xeon-W CPUs and ECC memory would exceed that model’s thermal limits. So Apple extensively redesigned the cooling system to support workstation-grade parts.

    The 2017 iMac Pro costs $4,999 and up, making it the most expensive, and most powerful, iMac ever. Only the RAM can be upgraded, and that’s a dealer-only installation, since it requires taking the unit completely apart; on the regular large iMac, memory upgrades are a snap.

    Apple promised that a new Mac Pro, which would meet the requirements of pros who want a box that’s easy to configure and upgrade, would appear in 2019, so maybe it’ll be demonstrated at a fall event where new Macs are expected.

    But Apple surely wouldn’t have made the commitment to expensive Macs if it didn’t take the platform — and Mac users — seriously. The iMac Pro itself represents a significant development in all-in-one personal computers.

    Don’t forget that the Mac, while dwarfed by the iPhone, still represents a major business for Apple. Mac market share is at its highest level in years in a declining PC market, serving tens of millions of loyal users. If you want to develop an app for iOS, tvOS or watchOS, it has to be done on a Mac. That isn’t going to change. In addition, Apple is porting several iOS apps to macOS Mojave, and developers will have the tools to do the same next year.

    According to software head Craig Federighi, iOS and macOS won’t merge and the Mac will not support touchscreens.

    Sure, the Mac may play second fiddle to the iPhone, but that doesn’t diminish the company’s commitment to the platform. Still, it’s easy for fear-mongering tech pundits to say otherwise, perhaps indirectly suggesting you shouldn’t buy a Mac because it will never be upgraded, or that upgrades will be half-hearted.

    Perhaps there’s an ulterior motive behind some of those complaints: they are designed to discourage people from buying Macs and push them towards the latest PC boxes that, by and large, look the same as the previous PC boxes with some upgraded parts.

    But since Intel has run late with recent CPU upgrades, Apple has often been forced to wait for the right components before refreshing Macs. That doesn’t excuse the way the Mac mini and the MacBook Air have been ignored, but I’ll cut Apple some slack with the Mac Pro, since a major update has been promised for next year.

    Now this doesn’t mean the Mac isn’t going to undergo major changes in the coming years. Maybe Apple is becoming disgusted with Intel’s growing problems in upgrading its CPUs, and will move to ARM. Maybe not. But that’s then, this is now.



    Consumer Reports’ Product Testing Shortcomings: Part Two

    May 24th, 2018

    In yesterday’s column, based on an article from AppleInsider, I expressed my deep concerns about elements of Consumer Reports’ testing process. I eagerly awaited part two, hoping that there would be at least some commentary about the clear shortcomings in the way the magazine evaluates tech gear.

    I also mentioned two apparent editorial glitches I noticed, in which product descriptions and recommendations contained incorrect information. These mistakes were obvious with just casual reading, not careful review. Clearly CR needs to beef up its editorial review process. A publication with its pretensions needs to demonstrate a higher level of accuracy.

    Unfortunately, AppleInsider clearly didn’t catch the poor methodology used to evaluate speaker systems. As you recall, CR uses a small room, and crowds the tested units together without consideration of placement, or of the impact of vibrations and reflections. The speakers should be separated, perhaps by a few feet, and the tests should be blind, so that the listeners aren’t prejudiced by the look of, or expectations for, a particular model.

    CR’s editors claim not to be influenced by appearance, but they are not immune to the effects of human psychology, or to the factors that might cause them to give one product a better review than another. Consider, for example, a second requirement of a fair blind test: level matching. All things being equal, a system that’s a tiny bit louder (even a fraction of a dB) might seem to sound better.

    I don’t need to explain why.
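
    For illustration, here’s a minimal sketch of how level matching might be handled before a listening trial, assuming each speaker’s output has been captured at the listening position with a measurement mic. The signals, sample rate and names below are hypothetical, not CR’s actual procedure:

```python
import numpy as np

def level_match_gain(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Linear gain that brings the candidate's RMS level in line with
    the reference's, so neither speaker gets a loudness advantage."""
    rms_ref = np.sqrt(np.mean(reference ** 2))
    rms_cand = np.sqrt(np.mean(candidate ** 2))
    return rms_ref / rms_cand

# Hypothetical one-second captures at 48 kHz; in practice these would
# come from a measurement microphone at the listening position.
rng = np.random.default_rng(0)
speaker_a = rng.normal(size=48_000)
speaker_b = 0.9 * rng.normal(size=48_000)   # plays roughly 0.9 dB quieter

gain = level_match_gain(speaker_a, speaker_b)
print(f"apply {20 * np.log10(gain):+.2f} dB to speaker B before the blind trial")
```

    A sub-decibel offset like that doesn’t register consciously as “louder,” yet it can reliably tilt preferences toward the hotter system.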

    Also, I was shocked that CR’s speaker test panel usually consists of just two people, with some sort of unspecified training so they “know” what loudspeakers should sound like. A third person is only brought in if there’s a tie. Indeed, calling this a test panel, rather than a couple of testers, or a test duo or trio, is downright misleading.

    Besides, such a small sampling doesn’t account for the subjective nature of evaluating loudspeakers. People hear things differently, and they have different expectations and preferences. All things being equal, even with blind tests and level matching, a sampling of two or three is still not large enough to establish a consensus. A listening panel with enough participants to reveal a trend might be, but the lack of scientific controls from a magazine that touts accuracy and reliability is very troubling.
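
    To put rough numbers on that, treat each listener’s verdict as a coin flip under the null hypothesis that the two speakers sound alike. A minimal sketch using SciPy’s binomial test, with illustrative panel sizes:

```python
from scipy.stats import binomtest

# If every listener on the panel prefers speaker A, how strong is the
# evidence that this isn't chance? Null hypothesis: a 50/50 coin flip.
for panel_size in (2, 3, 10, 20):
    result = binomtest(k=panel_size, n=panel_size, p=0.5, alternative="greater")
    print(f"panel of {panel_size:2d}, unanimous verdict: p = {result.pvalue:.5f}")
```

    Even a unanimous two- or three-person verdict (p = 0.25 and p = 0.125) falls well short of the conventional 0.05 significance threshold, while a unanimous panel of ten clears it easily.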

    I realize AppleInsider’s reporters, though clearly concerned about the notebook tests, were probably untutored about the way the loudspeakers were evaluated, and the serious flaws that make the results essentially useless.

    Sure, it’s very possible that the smart speakers from Google and Sonos are, in the end, superior to the HomePod. Maybe a proper test with a large enough listener panel and proper setup would reveal such a result. So far as I’m concerned, however, CR’s test process is essentially useless for anything other than systems with extreme audio defects, such as excessive bass or treble.

    I also wonder just how large and well-equipped the other testing departments are. Magazine editorial departments are usually quite small; the consumer publications I wrote for had a handful of people on staff, and mostly relied on freelancers. A full-time staff is expensive, and remember that CR carries no ads. Income comes mostly from magazine sales, plus the sale of extra publications and services, such as a car pricing service, and reader donations. In addition, CR requires a multimillion-dollar budget to buy thousands of products at retail every year.

    Sure, cars will be sold off after use, but even then there is a huge loss due to depreciation. Do they sell their used tech gear and appliances via eBay? Or donate them to Goodwill?

    Beyond the pathetic loudspeaker test process, we have the lame notebook battery tests. The excuse for turning off browser caching doesn’t wash. To provide an accurate picture of the battery life consumers should expect under normal use, CR should perform tests that don’t require activating obscure menus and/or features that only web developers might use.
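
    To see why disabled caching skews the workload, here’s a rough sketch of the difference between repeat page visits with and without a cache, using the third-party requests and requests-cache libraries. The URL and loop counts are purely illustrative, and wall-clock time is only a stand-in for the extra network and CPU work that drains a battery:

```python
import time
import requests          # plain HTTP client, no caching
import requests_cache    # drop-in session backed by a local cache

URL = "https://example.com/"   # stand-in for a test page

# Cache off: every "reload" pays the full network cost, as in CR's test.
start = time.time()
for _ in range(5):
    requests.get(URL)
uncached_secs = time.time() - start

# Cache on: repeat visits are served locally, closer to real browsing.
session = requests_cache.CachedSession("demo_cache")
start = time.time()
for _ in range(5):
    session.get(URL)
cached_secs = time.time() - start

print(f"cache off: {uncached_secs:.2f}s, cache on: {cached_secs:.2f}s")
```

    Less work per page load translates into longer battery life, which is why a cache-off loop won’t match what ordinary users see.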

    After all, people who buy personal computers will very likely wonder why they aren’t getting the battery life CR achieved. They can’t! At the end of the day, Apple’s tests of MacBook and MacBook Pro battery life, as explained in the fine print at its site, are more representative of what you might achieve. No, not for everyone, but certainly if you follow the steps listed, which do represent reasonable, if not complete, use cases.

    It’s unfortunate that CR has no competition. It’s the only consumer testing magazine in the U.S. that carries no ads, is run by a non-profit corporation, and buys all of the products it tests anonymously via regular retail channels. Its setup conveys the veneer of being incorruptible, and thus more accurate than the tests from other publications.

    It does seem, from the AppleInsider story, that the magazine is sincere about its work, though perhaps somewhat full of itself. If it is truly honest about perfecting its testing processes, however, perhaps it should reach out to professionals in the industries it covers and refine its methodology. How CR evaluates notebooks and speaker systems gives plenty of cause for concern.



    Some Troubling Information About Consumer Reports’ Product Testing

    May 23rd, 2018

    AppleInsider struck the motherlode. After several years of back-and-forth debates about its testing procedures, Consumer Reports magazine invited the online publication to tour its facilities in New York. On the surface, you’d think the editorial staff would be putting on its best face to get favorable coverage.

    And maybe they will. AppleInsider has only published the first part of the story, and there are apt to be far more revelations about CR’s test facilities and the potential shortcomings in the next part.

    Now we all know about the concerns: CR finds problems, or potential problems, with Apple gear. Sometimes the story never changes, sometimes it does. But the entire test process may be a matter of concern.

    Let’s take the recent review that pits Apple’s HomePod against the high-end Google Home Max, which sells for $400, and the Sonos One. In this comparison, CR found that, “Overall the sound of the HomePod was a bit muddy compared with what the Sonos One and Google Home Max delivered.”

    All right, CR is entitled to its preferences and its test procedures, but let’s take a brief look at what AppleInsider reveals about them.

    So we all know CR claims to have a test panel that listens to speakers set up in a special room that, from the front at least, comes across as a crowded audio dealer’s showroom, with loads of gear stacked one against another. Is that the ideal setup for a speaker system that’s designed to adapt itself to a listening room?

    Well, it appears that the vaunted CR tests are little better than what an ordinary subjective high-end audio magazine does, despite the pretensions. The listening room, for example, is small with a couch, and no indication of any special setup in terms of carpeting or wall treatment. Or is it meant to represent a typical listening room? Unfortunately, the article isn’t specific enough about such matters.

    What is clear is that the speakers, the ones being tested and those used for reference, are placed in the open adjacent to one another. There’s no attempt to isolate the speakers to prevent unwanted reflections or vibrations.

    Worse, no attempt is made to perform a blind test, so that a speaker’s brand name, appearance or other factors don’t influence a listener’s subjective opinion. For example, a large speaker may seem to sound better than a small one, but not necessarily because of its sonic character. The possibility of prejudice, even unconscious, against one speaker or another is not considered.

    But what about the listening panel? Are there dozens of people taking turns to give the speakers thorough tests? Not quite. The setup involves a chief speaker tester, one Elias Arias, and one other tester. In other words, the panel consists of just two people, a testing duo, supposedly trained as skilled listeners in some unspecified manner, with a third brought in only to break a tie. But no amount of training can compensate for the lack of blind testing.

    Wouldn’t it be illuminating if the winning speaker still won when you couldn’t identify it? More likely, the results might be very different. But CR often appears to live in a bubble.
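
    One common way to run such a comparison blind is an ABX protocol: the listener hears known samples A and B, then an unknown X that randomly re-plays one of them, and must name it. Here’s a minimal sketch; the trial count and the guessing “listener” are illustrative, not any publication’s actual procedure:

```python
import random

def run_abx(trials: int, listener, seed: int = 0) -> float:
    """Score a listener over an ABX session; a hidden key assigns
    either 'A' or 'B' to each trial's unknown X."""
    rng = random.Random(seed)
    key = [rng.choice("AB") for _ in range(trials)]
    correct = sum(listener(i) == answer for i, answer in enumerate(key))
    return correct / trials

def guesser(trial: int) -> str:
    # A listener who can't actually hear a difference is flipping a coin.
    return random.choice("AB")

print(f"guessing listener: {run_abx(1000, guesser):.1%} correct")
```

    If a trained listener can’t score meaningfully above 50% once the labels are hidden, the sighted preference was about something other than sound.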

    Speakers are also measured in a soundproof room (an anechoic chamber). Those results reveal a speaker’s raw potential, but they don’t provide data on how it behaves in a normal listening room, where reflections will impact the sound that you hear. Experienced audio testers may also perform the same measurements in the actual listening location, so you can see how a real-world set of numbers compares to what the listener actually hears.

    Comparing those in-room numbers with the ones from the anechoic chamber might also provide an indication of how the listening area impacts the measurements.
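
    As a toy illustration of the gap between the two kinds of measurements, model the in-room response as the anechoic response convolved with the room’s impulse response; even a single early reflection produces comb filtering that an anechoic chart can’t predict. Every number here is made up for the example:

```python
import numpy as np

SAMPLE_RATE = 48_000
N = 1024

# Idealized, perfectly flat speaker: a unit impulse.
anechoic_ir = np.zeros(N)
anechoic_ir[0] = 1.0

# Room: direct path plus one reflection arriving 5 ms later at half strength.
room_ir = np.zeros(N)
room_ir[0] = 1.0
room_ir[240] = 0.5          # 240 samples / 48 kHz = 5 ms

# What the listener hears is the speaker "through" the room.
in_room_ir = np.convolve(anechoic_ir, room_ir)[:N]
freqs = np.fft.rfftfreq(N, d=1 / SAMPLE_RATE)
mag_db = 20 * np.log10(np.abs(np.fft.rfft(in_room_ir)) + 1e-12)

# The 5 ms reflection notches the response every 200 Hz, starting at 100 Hz.
worst = np.argmin(mag_db[1:]) + 1
print(f"deepest dip: {mag_db[worst]:.1f} dB near {freqs[worst]:.0f} Hz")
```

    The same “speaker” that measures ruler-flat in the chamber shows deep periodic notches in the room, which is exactly why both sets of numbers are worth having.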

    Now none of this means that the HomePod would have seemed less “muddy” if the tests were done blind, or if the systems were isolated from one another to avoid sympathetic vibrations and other side effects. It might have sounded worse, the same, or the results might have been reversed. I also wonder if CR ever bothered to consult with actual loudspeaker designers, such as my old friend Bob Carver, to determine the most accurate testing methods.

    It sure seems that CR comes up with peculiar ways to evaluate products. Consider its tests of notebook computers, which load a series of web pages from a server in the default browser, with the cache off, to measure battery life. How could that approach possibly represent how people will use these notebooks in the real world?

    At least CR claims to stay in touch with manufacturers during the test process, so they can be consulted in the event of a problem. That approach succeeded when a preliminary review of the 2016 MacBook Pro revealed inconsistent battery results. It was strictly the result of that outrageous test process.

    So turning off caching in Safari’s usually hidden Develop menu revealed a subtle bug that Apple fixed with a software update. Suddenly a bad review became a very positive review.

    Now I am not going to turn this article into a blanket condemnation of Consumer Reports. I hope there will be more details about testing schemes in the next part, so the flaws — and the potential benefits — will be revealed.

    In passing, I do hope CR’s lapses are mostly in the tech arena. But I also know that their review of my low-end VW claimed the front bucket seats had poor side bolstering. That turned out to be totally untrue.

    CR’s review of the VIZIO M55-E0 “home theater display” mislabeled the names of the setup menu’s features in its recommendations for optimal picture settings. It also claimed that no printed manual was supplied with the set; this is half true. You do receive two Quick Start Guides in multiple languages. In its favor, most of the picture settings actually deliver decent results.



    Newsletter Issue #951: Recent Apple Gear Inspires the Critics

    February 19th, 2018

    The biggest issue with the media’s response to new Apple gear isn’t just Consumer Reports. True, the publication seems to have a penchant for inserting itself into the debate whenever something from Apple isn’t working as it’s supposed to. The publication’s marketing team evidently realizes that any bad news about the company will get loads of hits.

    So when the 2016 MacBook Pro delivered questionable battery life results, you can be sure that CR was ready to not recommend it in a preliminary review. But how many personal computers are even granted preliminary reviews?

    It turned out that, yes, the problem was due to an obscure Apple bug. But it was only triggered when Safari was used in a special mode that was primarily meant for web developers. How that was supposed to represent an honest appraisal of its real battery life escapes me. Even when the problem was fixed, the results were still pretty funky compared to what other publications measured. So CR appears to reside in its own reality too.
