Saturday, February 16, 2013

To get products into more hands, Google will open its own stores by the end of the year

An extremely reliable source has confirmed to us that Google is in the process of building stand-alone retail stores in the U.S. and hopes to have the first flagship Google Stores open for the holidays in major metropolitan areas.

The mission of the stores is to get new Google Nexus, Chrome, and especially upcoming products into the hands of prospective customers. Google feels right now that many potential customers need to get hands-on experience with its products before they are willing to purchase. Google competitors Apple and Microsoft both have retail outlets where customers can try before they buy. Google’s retail move won’t be an entirely new area, however.
Google Chrome pop-up stores
Google currently has Chrome store-within-a-store outposts in hundreds of Best Buys in the U.S. and 50 PCWorld/Dixons locations in the U.K. These stores have Google-trained employees who demonstrate the value of Chromebooks and can answer the multitude of questions people have before making a purchase. Our source told us the new Google Stores would be a much broader play. The Chrome SIS employees don't have sales targets, and they are there mostly to educate. Best Buy and Dixons also handle product and monetary transactions, not Google.
Google and Virgin also ran a limited test run of kiosks in five major airports, including this one at SFO. (Image: Scott Beale)
My understanding is that these new stores will operate independently and make direct sales to customers from Google like the Nexus online store does currently. It might also make sense for Google to sell its apparel and other Google-branded merchandise in these stores as well, but that’s speculation on my part.

The decision to open stores, I'm told, came when drawing up plans to take Google Glass to the public. The leadership thought consumers would need to try Google Glass firsthand before making a purchase. Without being able to use them firsthand, few non-techies would be interested in buying Google's glasses (which will retail for between $500 and $1,000). From there, the decision to sell other Google-branded products made sense.
Along with Glass, Google will have an opportunity to demonstrate other upcoming and Google X projects like driverless cars and mini-drone delivery systems at its stores.
There are small bits of anecdotal evidence that Google is looking into retail. It is hiring folks to develop Point of Sale systems, for instance. We’re told, however, that most of the ramping up of these stores will be done by an outside agency.

Recently, Apple CEO Tim Cook told analysts that Apple Stores were more than just stores, they were the face of the company.
I don’t think we would have been nearly as successful with iPad if it weren’t for our stores. It gives Apple an incredible competitive advantage. Others have found out it’s not so easy to replicate. We’re going to continue to invest like crazy. The average store last year was over $50 million in revenue.
Google may now understand that if it wants to roll out a whole new product category like Google Glass, it is going to have to dive into retail.

Facebook Hacked, Claims “No Evidence of User Data Compromised”


Facebook announced on Friday that it had been the target of a series of attacks from an unidentified hacker group, which resulted in the installation of malicious software onto Facebook employee laptops.
“Last month, Facebook security discovered that our systems had been targeted in a sophisticated attack,” the company said in a blog post. “The attack occurred when a handful of employees visited a mobile developer website that was compromised.”

Facebook says that these employees then had malware installed on their laptops as a result of visiting the site. The hack used what is called a "zero-day Java exploit," a previously unpatched vulnerability in Oracle's software; Java security holes have gained much attention in recent months. Essentially, anyone who visited a website using this attack and had Oracle's Java enabled in their browser was vulnerable. As a result, hackers inserted malware onto the laptops of multiple Facebook employees.
“As soon as we discovered the presence of malware, we remediated all infected machines, informed law enforcement, and began a significant investigation that continues to this day,” the post read.

In the company’s post, Facebook notes that it had “found no evidence that Facebook user data was compromised.”
Facebook did not say what the hackers did have access to, however, after the installation of said malware.
Facebook’s announcement comes on the heels of a string of recent attacks on other major Web sites. Twitter, the microblogging social network that hosts more than 200 million active users on its service, announced it had been hacked two weeks ago, and that upwards of 250,000 user accounts may have been compromised as a result.

Other targets have included the Washington Post, The New York Times and the Wall Street Journal, all of which have said they believe that the Chinese government was somehow involved in their system infiltration.
But in their respective blog posts, neither Facebook nor Twitter drew any direct comparison to, or made any accusation about, the hacks on the Times, the Journal or the Post.
Facebook declined to comment when asked if the company suspected the Chinese government was involved.
Something to note, however: Facebook directly points to the zero-day exploit of Oracle's Java vulnerability as the root cause of the attack. While Twitter did not detail exactly how its systems were infiltrated, Twitter director of information security Bob Lord reminded users that security experts strongly recommend disabling Java in their browsers.

That could suggest the two attacks were connected, though neither company says as much outright. But both Facebook and Twitter said in their posts that their respective companies were part of a larger series of attacks on multiple companies over the past few months.
Twitter did not immediately respond to a request for comment.
“Facebook was not alone in the attack. It is clear that others were attacked and infiltrated recently as well,” the company’s post says. “As one of the first companies to discover this malware, we immediately took steps to start sharing details about the infiltration with the other companies and entities that were affected. We plan to continue collaborating on this incident through an informal working group and other means.”

Friday, February 15, 2013

Dropbox for iOS App Gains Push Notifications for Shared Folders, New PDF Viewer

Dropbox has launched a new version of its iOS app, adding push notifications for shared folders for the first time. Previously, when waiting for someone to share a folder with you, you had to repeatedly open and close the app to check for its availability.
The feature is likely to be most appreciated by business users, as Dropbox file-sharing has become a very popular way to work around the size limits of email attachments when distributing large presentations. 

Dropbox 2.1 also adds better support for PDF viewing, displaying multiple pages on screen at once, and file sorting by date modified, another feature of greatest value to business users who often deal with frequently updated documents, where working with the latest version is vital.

Dropbox is a free download from the App Store.

iOS 6.1.2 to Address Exchange and Passcode Bugs Reportedly Coming Early Next Week

German site iPhone-Ticker reports [Google translation] that Apple is planning to release iOS 6.1.2 early next week to address both the Exchange bug and lock screen passcode issue affecting iOS 6.1 users. According to the report, iOS 6.1.2 is likely to arrive before Wednesday, February 20. 
Like iOS 6.1.1 released for the iPhone 4S earlier this week, iOS 6.1.2 will be a limited update addressing only these issues in order to allow Apple to quickly release it to the public. Apple last week seeded to developers an initial iOS 6.1.1 beta including broader changes such as improvements to Maps in Japan, but it now appears that this release will become iOS 6.1.3 as Apple addresses a few high-priority bug fixes on a separate basis.

Thursday, February 14, 2013

Surface Pro knocked for low repairability by iFixit

The firm gives the Surface Pro a one out of 10 rating, noting that there is a high risk of destroying the tablet just by opening it.
Microsoft Surface Pro interior
iFixit beckons you to take a tour of the innards of Microsoft's Surface Pro tablet.
(Credit: iFixit)
Don't try to repair Microsoft's Surface Pro tablet yourself.
That's the advice from the folks at iFixit, which rated the Surface Pro a mere 1 on a 10-point scale of repairability, with 10 being the easiest to repair.

The firm found that there are more than 90 screws in the device, and that there was a high risk of cutting a crucial wire just by opening the tablet, potentially destroying it. The display assembly is likewise extremely difficult to remove and replace, and there's a lot of adhesive used throughout the tablet.
"Unless you perform the opening procedure 100% correctly, chances are you'll shear one of the four cables surrounding the display perimeter," iFixit said.
If you manage to get the Surface Pro opened, the solid-state drive and the battery are both removable.
Surface Pro is Microsoft's attempt to bridge the PC and tablet worlds, and is a showcase for its Windows 8 operating system. The company is hoping big businesses will consider its tablet, which can run legacy Windows programs, over Apple's iPad, which recently got a 128GB version to better compete in the enterprise segment.
iFixit's teardown of the Surface Pro
(Credit: Screenshot taken by Roger Cheng/CNET)

Camera megapixels: Why more isn't always better (Smartphones Unlocked)

A 16-megapixel smartphone camera sounds great, but an 8-megapixel shooter could still produce better pics.

Increasingly, the 8-megapixel smartphone camera standard you thought you knew is ratcheting up to 13 megapixels for high-end phones.
In many products -- like this past January's Pantech Discover (12.6 megapixels), last October's LG Optimus G for Sprint (13 megapixels), and even last year's HTC Titan II (16 megapixels) -- we're already there.
And no, I won't forget to mention last February's Nokia 808 PureView, a 41-megapixel Mobile World Congress 2012 stunner that CNET camera editor Josh Goldman says is worth the hype.
Yet even though the technology exists, and some of it is even good, most best-selling flagship phones are, for now anyway, sticking to 8 megapixels -- like the Samsung Galaxy S3, the HTC Droid DNA, the BlackBerry Z10, and the iPhone 5. (The Nokia Lumia 920 nudges its sensor up to 8.7 megapixels.)
Shootout!: BlackBerry Z10 versus iPhone 5 versus Samsung Galaxy S3
Herein lies the cautionary reminder that photography nuts will tell you: it's possible for an excellent 5-megapixel camera to produce photos you prefer over those from a shoddy 12-megapixel camera. The megapixel number alone is no guarantee of heightened photographic performance.
Instead, the formula for fantastic photos comes down to the entire camera module: the size and material of the main camera lens, the light sensor, the image processing hardware, and the software that ties it all together.
Note: As always with this column, if you already consider yourself an expert, then this article is probably not for you.

Key ingredient No. 1: The sensor

Most budding and professional photographers will tell you that the most important ingredient in the optical system is the sensor, because that's the part that captures the light. The sensor is essentially the "film" material of a digital camera. No light, no photo.
Light enters through the camera lens, then passes to the camera sensor, which receives the information and translates it into an electronic signal. From there, the image processor creates the image and fine-tunes it to correct for a typical set of photographic flaws, like noise.
The size of the image sensor is extremely important. In general, the larger the sensor, the larger your pixels, and the larger the pixels, the more light you can collect. The more light you can catch, the better your image can be.
Shot with a Nokia 808 PureView
Shot with the Nokia 808 PureView.
(Credit: Nokia)
The experts I spoke with for this story had colorful ways of describing the relationship between pixels and sensors, but "buckets of water" or "wells" were a favorite (intentionally oversimplified) analogy.
Imagine you have buckets (pixels) laid out on a blacktop (sensor). You want to collect as much water in those buckets as possible. To extend the water-and-bucket analogy, the larger the sensor you have (blacktop), the larger the pixels (buckets) you can put onto it, and the more water (light) you can collect.

Larger sensors are the reason that 8 megapixels from a digital SLR camera best those 8 megapixels from a smartphone camera. You get roughly the same number of pixels, but those pixels on the DSLR get to be larger, and therefore let in more light. More light (generally) equals less-noisy images and greater dynamic range.
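That size advantage is easy to put in rough numbers. Here's a quick sketch; the sensor dimensions below are illustrative approximations for an APS-C-class DSLR sensor and a typical phone sensor, not exact product specs:

```python
# Rough comparison of per-pixel light-gathering area for two 8-megapixel
# cameras: an APS-C-class DSLR sensor versus a small phone sensor.
# Dimensions are illustrative approximations, not exact product specs.

def pixel_area_um2(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate area of one pixel in square microns."""
    sensor_area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)
    return sensor_area_um2 / (megapixels * 1e6)

dslr = pixel_area_um2(23.6, 15.6, 8)    # APS-C-class sensor
phone = pixel_area_um2(4.54, 3.42, 8)   # roughly 1/3.2-inch-class sensor

print(f"DSLR pixel area:  {dslr:.1f} um^2")
print(f"Phone pixel area: {phone:.1f} um^2")
print(f"Per-pixel light-gathering advantage: ~{dslr / phone:.0f}x")
```

With these ballpark dimensions, each DSLR pixel has on the order of 20x the light-collecting area of its phone counterpart, which is the gap the "buckets on a blacktop" analogy is describing.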

The fallacy of megapixels

You can start to see that cramming more pixels onto a sensor may not be the best way to improve image quality. That hasn't stopped the cell phone industry from doing just that.
Jon Erensen, a Gartner analyst who has covered camera sensors, remembers when we collectively made the leap from 1-megapixel to 2-megapixel sensors.
"They would make the pixel sizes smaller [to fit in more pixels]," Erensen told me over the phone, "but keep the image sensor the same."

"What ended up happening is that the light would go into the well [the "bucket"] and hit the photo-sensitive part of the image sensor, capturing the light. So if you make the wells smaller, the light has a harder time getting to the photo-sensitive part of the sensor. In the end, increased resolution wasn't worth very much. Noise increased."
The relationship between the number of pixels and the physical size of the sensor is why some 8-megapixel cameras can outperform some 12-, 13-, or even 16-megapixel smartphone cameras.
There's more involved, too. For one, a slim smartphone body limits the sensor size, and moving up the megapixel ladder without increasing the sensor size can degrade photo quality by letting in less light than you could get with slightly fewer megapixels.

Then again, drastically shrunken pixels aren't an inevitable consequence of increasing megapixels. HTC's Bjorn Kilburn, vice president of portfolio strategy, shared that the pixel size on the 16-megapixel Titan II measures 1.12 microns, whereas each pixel on the 8-megapixel One X measures a slightly larger 1.4 microns.
As a result, the photo quality on both these HTC smartphones should be comparable at a pixel-by-pixel level.
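To put those micron figures in perspective, a pixel's light-gathering area scales with the square of its pitch (side length), so the gap between the two HTC sensors can be sketched in a couple of lines:

```python
# Relative light-gathering area of the two HTC pixel sizes quoted above.
# Area scales with the square of the pixel pitch (side length in microns).

titan2_pitch = 1.12   # 16-megapixel Titan II
one_x_pitch = 1.4     # 8-megapixel One X

ratio = (one_x_pitch / titan2_pitch) ** 2
print(f"One X pixel area is ~{ratio:.2f}x the Titan II's")  # ~1.56x
```

So each One X pixel collects roughly half again as much light as a Titan II pixel, a modest enough difference that per-pixel quality can land in the same ballpark.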
Unfortunately, most smartphone-makers don't share granular detail about their camera components and sensor size, so until we test them, the quality is largely up in the air. Even if smartphone makers did release the details, I'm not sure how scrutable those specs would be to the majority of smartphone shoppers.

For more information on the interplay between megapixels and sensors, check out the excellent description in CNET's digital camera buying guide.

What about Nokia's 41-megapixel PureView?

Nokia's story behind its 808 PureView smartphone is really interesting. CNET Senior Editor Josh Goldman has written one of the best explanations of the Nokia 808 PureView's 41-megapixel camera that I've seen. I strongly suggest you read it.
In the meantime, here's a short summary of what's going on.

Juha Alakarhu (pronounced YOO-hah) is head of camera technologies at Nokia, where he works within the Smart Devices team. Alakarhu explained to me that although Nokia has engineered the 808 to capture up to 41 megapixels, most users will view photos at the 5-megapixel default.
Usually, when you use the digital zoom on your phone, you're blowing up and cropping in on an image to see each pixel up close. You all know what that can look like: grainy, blocky, and not always as sharply focused or as colorful as you'd like.
LG Optimus G
Sprint's version of the LG Optimus G shoots 13-megapixel images.
(Credit: Josh Miller/CNET)
In the 808 PureView, Nokia uses a process called "oversampling," which -- for the 808's 5-megapixel default resolution -- condenses the information captured in seven pixels into one (Nokia calls it a "superpixel"). If you zoom in on an object, you're simply seeing part of the image that's already there, rather than scaling up. This method should translate to higher-resolution digital printouts and zoom-ins than you'd normally see.

It took over five years to create the technology within the 808 PureView, Nokia's Alakarhu said. Not only does the 808 lean on the physical size of its sensor (specifically 1/1.2-inch), but custom algorithms also run on top of the sensor to adjust the image and reduce imperfections like noise. It's this set of instructions, not the sensor size alone, that Nokia terms PureView.
As CNET's Goldman has pointed out, this is an unusually large sensor for a smartphone, and it's also larger than sensors found on the vast majority of point-and-shoot cameras.
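The statistical payoff of oversampling can be shown with a toy model: averaging a group of noisy pixel readings into one "superpixel" cuts random noise by roughly the square root of the number of readings combined. This is a deliberately simplified stand-in for Nokia's proprietary algorithms, not the actual PureView pipeline:

```python
import random
import statistics

random.seed(0)
true_signal = 100.0
noise_sigma = 10.0

# One noisy full-resolution scanline of 9,000 pixel readings.
full_res = [true_signal + random.gauss(0, noise_sigma) for _ in range(9000)]

# Oversample: average groups of 9 adjacent readings into one superpixel.
# (The 808 combines roughly 7; 9 keeps the slicing arithmetic simple here.)
superpixels = [statistics.mean(full_res[i:i + 9]) for i in range(0, 9000, 9)]

print(f"Full-res noise:   {statistics.stdev(full_res):.2f}")     # ~10
print(f"Superpixel noise: {statistics.stdev(superpixels):.2f}")  # roughly a third of that
```

The superpixels hug the true signal about three times more tightly than the raw readings, which is why the 808's downsampled 5-megapixel output can look cleaner than a straight 5-megapixel capture.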

Key ingredient No. 2: Image processing

In addition to the size and quality of the lens and sensor, there's also the image processor. Most modern high-end smartphone CPUs have dedicated graphics processors built into their chip, which, being hardware-accelerated and not just software-dependent, can quickly render images like photos, videos, and games without overtaxing the main application processor.

At last year's Mobile World Congress, HTC touted a discrete image processor for its HTC One family of phones, called the HTC ImageChip, that is capable of continuous pictures at a rate of 0.7 second between shots.
The chip, which lives in the HTC One X, One X+, One V, and One S, is significant in providing a unified level of photo performance across the four models, whose other features differ quite a bit.
The separate processor also explains how HTC can claim those shot-to-shot times on all four phones in the family.

I promised that there was software bridging the hardware and the final image, and there is. Algorithms and other logic create the final image output on the phone's screen. This is where the most subjective element of photography comes in -- how your eye interprets the quality of color, the photo's sharpness, and so on.

The image processor is also what helps achieve zero shutter lag, where the camera captures the photo the moment you press the capture button, not a beat or two after.

Wait, there's more

There's much more to know about the competing technologies that go into sensors, but one trend stands out: backside-illuminated sensors are starting to be used much more in smartphones.
This type of sensor is often synonymous with better low-light performance because it increases photosensitivity. However, if you shoot in bright light, it can also blow out your image. Here are more details on how backside illumination works.
iPhone 5
The iPhone 5 has fewer controls, but great image processing.
(Credit: CNET)
The camera's sensor size and image processor may be the most crucial elements for creating quality smartphone photos, but other considerations come into play. Higher quality components, for example, can help tease out better photos, but they could also cost more, which could lead to a marginally pricier camera.

While the camera module is only one part of a phone's total cost, Gartner analyst Jon Erensen said that high-end parts could double the price of a basic camera set, and estimated that parts could cost $15 per phone. The smartphone makers I contacted for this article, like Samsung and Nokia, wouldn't share sourcing or pricing information.

Usability is king

It's quickly becoming a well-worn adage that the best camera is the one you have on you.
Despite the intense engineering focus that goes into the camera's physical elements, it's hard to overstress the importance of both convenience and the total customer experience. How easy it is to open the camera app from the lock screen, how quickly photos capture, and how appealing the special effects and shooting modes are all add up to a camera you want to use versus one you don't.

Increasingly, some phone-makers, like HTC and Samsung, include extra logic in their big-ticket phones, like detecting smiles and selecting the best group photo of a bunch.
For most phone owners, said Drew Blackard, Samsung's senior manager of product planning, being able to quickly and easily share photos on the fly is far more important than pixel count. Just look at Twitter and Instagram's runaway success in sharing simple, small photos.
BlackBerry Z10, Samsung Galaxy S3, iPhone 5
The BlackBerry Z10, Samsung Galaxy S3, and iPhone 5 capture the moment in 8 megapixels.
(Credit: Josh Miller/CNET)
Gartner analyst Jon Erensen agrees. "What do you actually gain from going higher than you need, in a practical sense?" he said, adding that most people upload smartphone photos to online albums or e-mail them to family and friends, uses that require far fewer than 8 megapixels, or even 5.

A recent trip to Indonesia illustrates what Nokia's Alakarhu and the others mean by the whole experience taking precedence over the specs. While trekking with 22 pounds of gear on his back -- including a high-quality DSLR -- Alakarhu repeatedly reached for the Nokia 808 PureView he kept in his pocket.

Although he considers himself an amateur photographer who will put in the time to frame a great shot, Alakarhu said he found himself using the PureView more because of its easy availability and quick start time when he didn't want to take the time to set up a more involved shot on his digital camera.

I have my share of similar stories, and I suspect that you do, too.
We definitely shouldn't scrap pixel count when weighing smartphone camera specs against others, but when it comes to all the hardware and software that create a great photo, the megapixel count alone just isn't enough. It's time we shift the focus somewhere else -- like maybe to that undersung sensor.

A keyboard that rises up from flat touch screens

A startup creates a physical keyboard for touch-screen devices, like smartphones or tablets, that appears when you need to type and disappears when you're done. CNET's Sumi Das tries it out.

A few weeks ago, right before the new BlackBerry 10 phones were announced, I dragged a cameraman to San Francisco's Financial District during lunch hour and asked random strangers to name BlackBerry's best feature. Care to guess what the results of my highly unscientific poll were? Even iPhone and Android users agreed -- the famed keyboard is BlackBerry's top trait.
Increasingly, we "mobile device addicts" are favoring our smartphones and tablets over our traditional computers to meet our digital demands. Trouble is, a lot of us still despise typing on these beloved touch-screen devices. One Silicon Valley startup has created a new kind of keyboard that could help reduce typos and other fat-fingered mistakes.
Fremont, Calif.-based Tactus Technology uses microfluidics to make physical keys bubble up from the surface of a touch screen when you need to type and disappear when you don't. Microfluidics may sound foreign, but if you've operated an inkjet printer, you've used the technology.
So how do keys appear out of nowhere? It starts with a panel that has channels built into it. The channels are filled with a non-toxic fluid. By increasing the pressure in the channels, the fluid pushes up the surface of the panel, creating an actual key. What's more, Tactus says the pressure will be adjustable, so the keys could feel a bit squishy, like a gel pack, or they could be harder, like the plastic keys on a laptop.
Tactus demoed a working prototype for us, but the company is still refining the technology. Right now, there's an audible noise when the keys appear; it should be silent in the final version. And the surface has to be rugged -- you wouldn't want to spring a leak, after all. Durability tests are part of that process, since Tactus needs to guarantee the surface can't be punctured by a newly manicured fingernail or a 3-year-old trying to scribble on your smartphone with a pen.
Currently, the technology is limited in that it's a fixed single array. You wouldn't be able to use the Tactus keyboard in both portrait and landscape mode, for example. But the goal is to make the third generation of the product dynamic. "The vision that we had was not just to have a keyboard or a button technology, but really to make a fully dynamic surface," says cofounder Micah Yairi, "So you can envision the entire surface being able to raise and lower depending on what the application is that's driving it." Meaning it could display a keyboard when you're typing an e-mail, a number pad when you're dialing a phone number, and perhaps letter tiles when you're playing Words With Friends.
Tactus says it wants to be in production by the end of 2013 or beginning of 2014. Executives were mum about which companies they're talking to. Just one partnership has been announced to date, with Touch Revolution, a Bay Area company that makes touch displays. Tactus VP Nate Saal says, "There are more and more touch screens being integrated in devices... from your mobile phone, cell phone, into refrigerators and appliances and I think those are all opportunities for Tactus to really improve the interface and usability of those devices."
Tactus took its prototype to CES in January. Among the attendees who tried out the technology was a man who was visually impaired. His reaction upon feeling the keys under his fingers? "Amazing."

Apple looks to end blurry iPhone photos with new invention

In a patent filing discovered on Thursday, Apple describes a digital camera implementation that continuously captures and stores images in a buffer until the user releases the shutter, at which time the system automatically selects the best picture based on a number of predetermined variables.
Continuous Imaging

Source: USPTO

Filed with the U.S. Patent and Trademark Office in October of 2012, Apple's "Image capturing device having continuous image capture" offers owners of small, portable devices more leeway when trying to get the perfect shot. 

While smartphones like the iPhone have relatively high-quality camera systems, the products are not purpose-built for picture taking and come with a multitude of compromises. For example, a smartphone's optics and imaging sensor are minuscule compared to modern equivalents seen in full-size DSLRs and pocketable point-and-shoots. The lack of a powerful image processor and other vital components just add to the challenge of getting high quality photographs from a handset's camera. 

From the patent filing's background:
These image capturing devices typically use a preview resolution for capturing a preview image of a scene. Subsequently, a user provides an input to take a photograph. The device switches from preview resolution to full resolution prior to capturing an image. Switching from preview to full resolution causes a time lag, which may lead to user frustration. Also, camera shake during the time when a user presses a button or touches a touchscreen can degrade image quality.

The iPhone 5, for example, offers a preview image not quite at full resolution. This allows for fast screen refresh times that give a better overall user experience by simulating a "live" environment. Preview quality is most noticeable when zooming in on a subject, when the image becomes pixelated and sometimes blurry. 

Apple's system starts up when a user launches a photo app like Camera, continuously capturing and storing sequential full-resolution images to a buffer. When a request is given (shutter press or screen touch), the system pulls from the pool and chooses one image based on when it was captured, its quality, or a combination of the two. 

Depending on the quality of the image, the processing logic can select the photo either from the buffer or from the frame captured concurrently with the shutter press. The system uses a "focus score" based on contrast, image resolution, dynamic range and color rendering properties. By weighting the scores of tagged images, along with factoring in exposure time, the logic can choose which photo to use. Memory is conserved by purging the buffer at a predetermined time, or when capacity reaches a certain threshold.
Camera Flowchart

Example flowchart of processing logic.

In one embodiment, the selected picture can be displayed on screen in full resolution immediately after a request as confirmation for the user.
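The general scheme the filing describes -- a rolling buffer of recent frames plus a quality score used to pick the keeper -- can be sketched in a few lines. The buffer size, scoring values, and recency weighting below are invented for illustration and are not taken from the patent:

```python
from collections import deque

class ContinuousCapture:
    """Toy model of buffered capture: keep the last few full-resolution
    frames and, on shutter press, return the one with the best weighted
    quality/recency score."""

    def __init__(self, buffer_size=5):
        # Oldest frames are purged automatically once the buffer is full,
        # mirroring the patent's memory-conservation idea.
        self.buffer = deque(maxlen=buffer_size)

    def on_new_frame(self, frame, focus_score):
        """Called continuously while the camera app is open."""
        self.buffer.append((frame, focus_score))

    def on_shutter(self, recency_weight=0.01):
        # Weight sharpness against recency: later frames get a small bonus,
        # so of two equally sharp frames the more recent one wins.
        best = max(
            enumerate(self.buffer),
            key=lambda item: item[1][1] + recency_weight * item[0],
        )
        return best[1][0]

cam = ContinuousCapture()
for name, score in [("f1", 0.60), ("f2", 0.90), ("f3", 0.85), ("f4", 0.40)]:
    cam.on_new_frame(name, score)
print(cam.on_shutter())  # "f2": sharpest frame wins, even though it isn't newest
```

The point of the design is that the "shot" has effectively already been taken before the button press, which is how a scheme like this sidesteps both shutter lag and button-press camera shake.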

It is not clear if this exact technology is being implemented in iOS and devices like the iPhone, iPad and iPod touch, but some aspects of the invention can be seen in Apple's latest products. 

The patent application was first filed in October 2012 as a divisional of another co-pending filing from 2009, and lists Ralph Brunner, Nikhil Bhogal and James David Batson as its inventors.