Friday, October 19, 2012
Posted by gkJr. at 6:51 PM
Media were invited this week to an event scheduled for Monday, Oct. 29, in New York City. The invitation shows the Android search bar with the tagline "The playground is open," set against a cartoon New York skyline.
In addition to a new Nexus phone expected to be built by LG, Google is also expected to show off the next major version of its Android mobile operating system, dubbed "Key Lime Pie." The successor to Android 4.1 "Jelly Bean" is expected to come preinstalled on the new LG handset, based on the Optimus G design.
Beyond falling less than a week after Apple's Oct. 23 event, Oct. 29 is also the day Microsoft will hold its own press briefing across the country in San Francisco. That's where Microsoft will formally launch its Windows Phone 8 platform, which aims to compete with both Apple's iPhone and devices running Google's Android.
Google's event will be held at the Basketball City venue in New York, and will kick off at 10 a.m. Eastern, 7 a.m. Pacific. Microsoft has not yet revealed the venue in San Francisco for its Windows Phone 8 presentation, but it will begin at 1 p.m. Eastern, 10 a.m. Pacific.
And next Tuesday, Apple's event will be held at the California Theatre in San Jose, Calif. The presentation, in which the company is expected to unveil a smaller iPad along with new Macs, will begin at 1 p.m. Eastern, 10 a.m. Pacific.
Posted by gkJr. at 12:10 AM
Thursday, October 18, 2012
Apple's invention for "Passive proximity detection" obviates the need for the current IR sensor, replacing it with a system that detects and processes sound waves to determine how far away an object is from a portable device.
Much like passive echolocation or a loose interpretation of passive sonar, the filing describes a system that takes two sound wave samples, a "before" and an "after," and compares the two to determine if an external object's proximity to the device changed. "Sampling" occurs when a transducer, such as a microphone, picks up ambient sound and sends a corresponding signal to the device's processor for analysis.
The invention relies on basic acoustic principles as applied to modern electronics. For example, the equalization curve of the signal a microphone picks up from an audio source changes when the device moves toward or away from an object, which "variably reflect[s] elements of the sound wave."
This effect may be noticed when sound is reflected by soft material as opposed to a hard surface. Generally, sound reflected off the soft surface will seem muted when compared to the same sound reflected off a hard surface located at the same distance and angle from an audio transducer and a sound source.
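The "before"/"after" comparison the filing describes can be sketched in code. This is only an illustration, not Apple's actual method: it uses a Goertzel filter to track how the balance between one low and one high frequency band shifts between two samples, on the premise that a nearby object damps high frequencies more than low ones. All frequencies and thresholds here are made-up values.

```python
import math

def goertzel_power(samples, rate, freq):
    """Power of a single frequency component (Goertzel algorithm)."""
    k = int(0.5 + len(samples) * freq / rate)
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def spectral_shape(samples, rate, low=500.0, high=5000.0):
    """Fraction of probe energy in the high band; an object close to
    the mic tends to mute high frequencies more than low ones."""
    lo = goertzel_power(samples, rate, low)
    hi = goertzel_power(samples, rate, high)
    return hi / (lo + hi + 1e-12)

def proximity_changed(before, after, rate, threshold=0.25):
    """The filing's before/after comparison: a large shift in the
    low/high balance between the two samples suggests an object
    moved toward or away from the device."""
    return abs(spectral_shape(before, rate)
               - spectral_shape(after, rate)) > threshold
```

A real implementation would compare many bands of the spectrum rather than two probe tones, but the structure is the same: sample, extract a spectral signature, and compare it against the previous signature.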
In one of the invention's embodiments, two microphones are situated on different planes of a device and detect the subtle changes across a broad audio spectrum caused by interference when a sound wave interacts with an object.
To relate this to a common phenomenon: when a seashell is held up to one's ear, a resonant cavity is formed that amplifies ambient sounds. This high-Q filtering produces the ocean-like sound one hears.
In another example, response signals produced by two microphones located at either end of a device can be compared to determine if an object is nearer to one or the other. For example, when a user's face is close to the top of a device, as is usual when talking on the phone, the microphone located near the ear will produce a different reactance ratio than the microphone located at the device's base.
Microphones located at two ends of an iPhone.
Basically, the two transducers, or microphones, detect slight changes in ambient sound and send corresponding signals to a processor, which then compares the two to determine whether an object is in close proximity to either of the mics.
Monitoring of the microphones can be live or set to take samples at predetermined intervals, such as after a user begins to speak. Placement of the microphones can also be tweaked, and in some cases can be located next to each other.
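As a rough sketch of the two-mic comparison, the following uses simple RMS level asymmetry as a stand-in for the filing's response-signal comparison; the margin value is arbitrary, and the function names are not from the patent:

```python
import math

def rms(samples):
    """Root-mean-square level of one mic's sample window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def nearer_end(top_mic, bottom_mic, margin=1.5):
    """Compare the two mics' responses to the same ambient sound.
    Returns 'top', 'bottom', or None when neither clearly dominates."""
    top, bottom = rms(top_mic), rms(bottom_mic)
    if top > bottom * margin:
        return "top"      # e.g. a face held near the earpiece mic
    if bottom > top * margin:
        return "bottom"
    return None
```

The patent compares richer quantities than raw level, but the decision structure is the same: two signals in, one comparison, one proximity verdict out.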
Finally, a more active detection method is proposed, where an internal speaker generates noise, taking the place of ambient sound waves.
Illustration of peak frequency compared to ambient noise signals produced by mics.
As portable electronic devices become increasingly smaller, the need to develop space-saving components, or to combine parts to serve a number of uses, becomes more pressing. Such is the case with Apple's latest iPhone 5, a device that packs 4G LTE, Wi-Fi and Bluetooth communications, a battery that can last for days, a 4-inch Retina display, two cameras, and a litany of other features into a chassis only 7.6 mm deep.
Space is already at a premium with the iPhone, as evidenced by the new Lightning connector, which Apple's Worldwide Marketing chief Phil Schiller said was needed to create such a thin device. Moving forward, the company is rumored to incorporate near field communications (NFC) for e-wallet payments, which will take up even more precious room.
It remains to be seen if Apple will one day employ passive proximity detection in a consumer device; however, the iPhone is a platform ripe for deployment, as it already boasts three mics for noise canceling and call quality purposes.
Posted by gkJr. at 7:10 PM
The filing, titled "Voice assignment for text-to-speech output," looks to create "speaker profiles" which can change the voice characteristics of TTS output to match parsed-out metadata like age, sex, dialect and other variables.
As noted in the application, many systems exist today to aid the visually impaired, including the one on Apple's iPhone; however, most TTS engines "generate synthesized speech having voice characteristics of either a male speaker or a female speaker. Regardless of the gender of the speaker, the same voice is used for all text-to-speech conversion regardless of the source of the text being converted." Apple's invention proposes a different solution.
Instead of hearing the same voice for every message, the invention obtains metadata "directly from the communication or from a secondary source identified by the directly obtained metadata" to create the most suitable speaker profile.
According to the patent filing, "Providing a speech output that is associated with a speaker profile allows speaker recognition while providing a more enjoyable and entertaining experience for the listener."
An example is provided in which a user receives a message from "Charles Prince," who has an email address of firstname.lastname@example.org, regarding a party for "Albert." In this case, the system could use the ".uk" address as primary metadata. Secondary metadata can be gathered if a contact card is attached to the message, or if Charles Prince's information is already in the user's address book.
The data from the text and the corresponding metadata are then fed into a TTS engine, which assigns a speaker profile to convert the text into speech.
After converting each word and phonetic transcription in the text into the distinct sounds that make up a given language, the TTS engine then divides and marks prosodic units such as phrases, clauses and sentences.
In some implementations, speech can be created by piecing together pre-recorded voice fragments, including sounds, entire words or even sentences, that are stored on a mobile device or in an off-site database.
In other implementations, the TTS engine can include a synthesizer that "incorporates a model of the human vocal tract or other human voice characteristics to create a synthetic speech output according to the speaker profile."
One of the most interesting iterations notes that "a speaker's voice can be recorded and analyzed to generate voice data."
From the patent filing's description:
For example, the speaker's voice can be recorded by a recording application running on the device or during a telephone call (with permission). The voice characteristics of the speaker can be obtained using known voice recognition techniques. In this implementation, a speaker profile may not be necessary as the speaker's name can be directly associated with voice data stored in voice database.
As for output, the system may pick the ".uk" email address to use as primary metadata, taking contact card information like a birthday to determine sex and age, to subsequently output a speaker profile matching an older male with a British accent. Charles Prince's physical address, phone number, or picture can also be used to determine a speaker profile. The more metadata available, the more refined the output.
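The metadata-to-profile mapping might look something like the following sketch. Every field name and profile value here is hypothetical, chosen only to mirror the ".uk"/age example above, and is not taken from the filing:

```python
def choose_speaker_profile(metadata, current_year=2012):
    """Map sender metadata to TTS voice parameters.
    All keys and values are illustrative, not from the patent."""
    profile = {"accent": "en-US", "gender": "neutral", "age": "adult"}

    # Primary metadata: a .uk address suggests a British-accented voice.
    if metadata.get("email", "").endswith(".uk"):
        profile["accent"] = "en-GB"

    # Secondary metadata from a contact card or the address book.
    if metadata.get("gender"):
        profile["gender"] = metadata["gender"]
    birth_year = metadata.get("birth_year")
    if birth_year is not None:
        age = current_year - birth_year
        profile["age"] = "senior" if age >= 60 else "adult"

    return profile
```

The more fields the contact card supplies, the more specific the resulting profile, which matches the filing's point that richer metadata yields more refined output.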
Flowchart of TTS system.
It is unclear if Apple plans to deploy such a system; however, the company currently has a similar, albeit less advanced, feature in Siri. While limited to certain regions, Siri offers a choice of dialects like "English (United States)" or "English (United Kingdom)" both to recognize incoming voice commands and to provide responses in the selected accent.
Posted by gkJr. at 7:09 PM
Posted by gkJr. at 10:44 AM
Apple’s web traffic share among mobile devices is huge, according to new numbers from Chitika. The online ad network is seeing 43 percent of smartphone web usage coming through iPhones up to the 4S, plus another 3 percent from the iPhone 5 alone. By contrast, the Samsung Galaxy S III is driving 2 percent of mobile web traffic on its network, combined with 15 percent across all other Samsung mobile devices.
Together, the two companies dominate: all other smartphones combined add up to just 37 percent, while Apple and Samsung account for 63 percent of the mobile traffic Chitika sees through its millions of daily ad impressions. Last week, the company told us that the iPhone 5 had quickly risen to surpass the GSIII as a traffic driver on its network, but these latest figures leave no doubt which two companies are battling it out for overall smartphone market dominance.
Posted by gkJr. at 12:25 AM
At long last, Samsung (005930) today announced that it will start pushing out the Galaxy S III’s Android 4.1 Jelly Bean upgrade to American users. However, the company did not give specific timelines for when the various carriers would have the upgrade ready, saying only that all American Galaxy S III customers would be able to download Jelly Bean from Kies “in the coming months.” Individual wireless carriers will make separate announcements to reveal when their customers can upgrade their devices to Jelly Bean. Samsung’s full press release is posted below.
Samsung Mobile to Begin Jelly Bean Update with TouchWiz® Enhancements for Galaxy S® III Smartphones in the U.S.

Available in the coming months, the Galaxy S III update offers the latest Android™ platform; new camera, video and customization enhancements; and access to ESPN’s ScoreCenter app with custom AllShare® integration

DALLAS — October 17, 2012 — Samsung Telecommunications America, LLC (Samsung Mobile) – the No. 1 mobile phone provider in the United States and a subsidiary of Samsung Electronics Co., Ltd., the No. 1 smartphone provider worldwide – continues its commitment to bringing the latest innovation to market with the rollout of Android 4.1, Jelly Bean, the latest version of the world’s most popular smartphone operating system, to all Galaxy S III smartphones in the U.S. in the coming months.

The update will be made available both over the air and as a download via Kies, Samsung’s content sync and software update solution. The specific timing and update method will be announced by each carrier partner: AT&T, Sprint, T-Mobile, Verizon Wireless and U.S. Cellular.

Galaxy S III owners will receive the Jelly Bean update as well as a host of new and enhanced TouchWiz features, making it a faster, richer and more responsive device experience. Samsung’s best-selling flagship smartphone just got even better.

Samsung refined and enhanced the Galaxy S III experience by adding new capabilities to the camera, video and user interface, including:

- Camera Enhancements:
  - New live camera and camcorder filters offer a range of new ways to spark your creativity. Warm vintage, cold vintage, black and white, sepia, color highlights (blue, green, red/yellow), and many more are selectable from the main camera screen.
  - Pause and resume while recording video allows users to string together multiple captured video clips from a party, birthday or sporting event into a single file with no post editing required.
  - Low light photo mode takes advantage of Galaxy S III’s best-in-class High Dynamic Range (HDR) capabilities and offers an optimized mode for low light and indoor photos.
- Pop Up Play Update: Users can now easily resize or pause the Pop Up Play picture-in-picture video window, taking full advantage of the Galaxy S III’s powerful processor and large 4.8-inch screen.
- Easy Mode: Easy Mode is a simplified user experience option for first-time smartphone owners, providing large home screen widgets that focus on the device essentials. The Easy widgets include both 4×2 and 4×4 arrangements of favorite contacts, favorite apps, favorite settings, clock and alarm.
- Blocking Mode: Galaxy S III owners can disable incoming calls, notifications, alarms and LED indicators for a designated period of time.
- Improved Usability: Users now have multiple keyboard options with the addition of the Swype® keyboard.

Android 4.1 Jelly Bean offers users a smoother, faster and more fluid experience with expanded feature functionality, including:

- Google Now™: Google Now gives users the right information at the right time, like how much traffic to expect before leaving work, when the next train is scheduled to arrive at the subway station or the score of a favorite team’s current game – conveniently delivered as notifications. Additionally, Google Now provides powerful voice assistant functionality across a range of domains, including weather, maps, navigation, search, image search, flight status and more. Google Now can conveniently be launched from the lock screen shortcut or by a long press on the menu button from any screen.
- Rich Notifications: Notifications can now expand and shrink with a pinch to show the right amount of information a user needs. Notifications have been enhanced so action can be taken without having to launch the app first – like sharing a screenshot directly from the notification.
- Automatic Widget Adjustment: Customizing the home screen is easier than ever before. Users can simply place a new icon or widget on the screen, and existing icons will move out of the way to make space. When widgets are too big, they automatically resize to fit on the screen.

In addition to the operating system update, Samsung and ESPN worked together to integrate AllShare® technology into ESPN’s popular ScoreCenter® application. This means Galaxy S III owners will now be able to wirelessly push on-demand ESPN global sports coverage and highlights from the ESPN ScoreCenter app to their Samsung SMART TV™. When on the same Wi-Fi network as a Samsung SMART TV, a sharing icon will appear within the ScoreCenter video player which allows users to seamlessly push what they are watching to the TV. The ScoreCenter app with AllShare integration is available today for download through S Suggest™ on all U.S. Galaxy S III devices.

With the Jelly Bean update, the Galaxy S III will also add support for some exciting new accessory experiences.

- AllShare® Cast Wireless Hub: The AllShare Cast Wireless Hub accessory allows users to wirelessly mirror their phone screen to any HDTV or HDMI® display. Whether it’s sharing pictures, browsing the Web, playing games, streaming music, watching videos or projecting business presentations, users can control the action on the big screen wirelessly from their smartphone. AllShare Cast Wireless Hub even supports licensed content playback of premium TV and movies.
- NFC One Touch Pairing Support: Galaxy S III can now pair with supporting NFC Bluetooth® accessories in a single touch. The Samsung Galaxy HM3300 Bluetooth headset will be the first Samsung portfolio accessory to support this functionality (available in the near future), allowing users to pair their headset by touching it to the back of their device.
Posted by gkJr. at 12:19 AM