
Review: Ubi the ubiquitous computer (beta)


We last heard from Toronto-based Unified Computer Intelligence Corporation (UCIC) at the beginning of November 2012 when pre-order availability for its Ubi Wi-Fi-connected, voice-operated computer was announced. Last month, the company launched a beta testing program named Odyssey, where a limited number of developers and early adopters could get access to the pre-release system and help shape its future. Gizmag managed to grab one of those beta units, and has spent the last few weeks chatting away to the always-on ubiquitous computer.

Until quite recently, being able to interact with a computer system or virtual digital assistant without the need for physical input peripherals like mice or keyboards, or output devices like monitors, was the stuff of science fiction. Now many of us regularly use technology that allows us to control actions on our smartphones and wearables using voice commands. Google brought browser-based voice search to desktop computing in 2011, but hands-free, eyes-free interface excitement hit fever pitch the year after when UCIC launched a Kickstarter campaign for the development and release of its first product, something called Ubi.

The original Kickstarter prototype is quite different from the current beta model

Short for the ubiquitous computer, the always-on factoid oracle captured the collective imagination of more than a thousand backers to secure nearly US$230,000 in funding, around 637 percent of its original goal. UCIC founders Amin Abdossalami, Mahyar Fotoohi, Leor Grebler, and Blake Witkin started shipping out the first early bird units in October 2013 (a little later than originally planned) and have now just about fulfilled all Kickstarter orders. Development of Ubi continues apace of course, with the latest news being the launch of Project Odyssey.

Project Odyssey

The goal of this project is to give early adopters the chance to help shape Ubi's future direction, developing links to home automation devices and internet services and improving on the technology used to deliver what a user requests by voice.

"We're developing our natural language understanding and artificial intelligence engine and the hope with the project is to attract people to develop new interactions with the Ubi," Grebler told Gizmag. "It's much more developer-focused and rather than an 18 month wait for the Ubi, we'll only be taking orders once shipping times are more immediate."

The Odyssey project is limited to 5,000 participants at the time of writing, and to residents in the US. Candidates accepted into the program will be offered a beta unit for $299.

Gizmag dives deep into beta waters

The Gizmag beta review unit came with power adapter and instruction manual

After opening the box, the first thing that jumps out is the design change over the units featured in the Kickstarter campaign. The 4.72 x 4.72 x 1.57 in (120 x 120 x 40 mm) Ubi is now encased in a plastic shell with a circular metal grille to the front. Two small speakers can be seen behind the grille and there are two microphones to the top of the housing.

The device is edged by a wavy multi-colored LED light band sandwiched between the front and back of the casing. A USB port (which is not currently used, but reserved for future expansion) and a headphone jack have been placed on one side, and a supplied power adapter slots in the back before being plugged into a wall outlet.

As well as being a go-to source for traffic updates, local weather and the answers to all manner of questions, Ubi uses its onboard sensors to monitor the environment around it, providing information on the indoor temperature, light and sound levels, air pressure and humidity. It can also fire off dictated emails or text messages, and will be able to function as a VoIP telephone in the not-too-distant future.

"We hope the Ubi becomes one part of your daily gadget diet," said Grebler when asked where he thought Ubi might sit in a world of Siri, Now, Glass and Gear. "Wearables, smartphones, desktops, laptops etc. all have their spot. We hope the Ubi will fit into this ecosystem and offer a more natural interface. Google Now/Siri integration would be very interesting for us."

The Ubi looks so slick and polished that it's easy to forget that this is a device that's, as Grebler puts it, "definitely still beta and there are a lot of things that we're working on."

Inside the Ubiquitous Computer

"The Ubi has two components on it locally," Grebler revealed. "One is a board that we spent some time to manufacturer that includes temperature, light, humidity, and air pressure sensors, LEDs, stereo speakers, dual microphones, and an ARM processor where we do some DSP (and have the ability to update this remotely). The other component is a small Android computer that runs several apps on it that has 802.11b/g/n Wi-Fi, Bluetooth. It has a dual-core 1.6 GHz Cortex A9 ARM processor, 1 GB of RAM, and runs Android 4.1 Jelly Bean."

A look at Ubi sans housing

The unit features trigger technology powered by Sensory's TrulyHandsfree software, uses Google/Android libraries for speech recognition, and draws on a combination of in-house systems and partner internet sources to find answers to users' questions.

Setup

It is recommended that Google's Chrome browser is used to set up Ubi, and the notes also advise the use of a mobile device like a smartphone or tablet. I opted to use my Galaxy Note 8.0 and headed for the online setup page. Users need to register an email address and password to access the setup page, and then respond to a confirmation email before continuing. The process runs to about half a dozen screens, and is fairly straightforward.

The setup process runs to about half a dozen screens, and is fairly straightforward

The service is currently limited to WPA/WPA2-secured Wi-Fi. Setup smoothly takes you from UCIC's cloud portal over to the device's own broadcast network, where you enter your home network ID and password (which Grebler told me are stored locally only, and not sent on to UCIC's servers), and then back again to complete the process.
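That handoff is essentially a captive provisioning flow: while you are joined to the Ubi's temporary network, the setup page hands your credentials straight to the device. The sketch below is purely illustrative, assuming a hypothetical local address, path and field names rather than UCIC's actual setup API:

```python
# Illustrative sketch only: the address, path and field names below are
# hypothetical stand-ins, not UCIC's real provisioning API.
import json
import urllib.request

def send_wifi_credentials(ssid: str, password: str) -> None:
    """Hand the home network details to the Ubi while connected to its
    temporary setup network; per UCIC, they are stored on the device only."""
    payload = json.dumps({"ssid": ssid, "security": "WPA2", "passphrase": password}).encode()
    request = urllib.request.Request(
        "http://192.168.43.1/setup/wifi",  # hypothetical address on Ubi's own network
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        print("Ubi responded with status", response.status)
```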

After setup, the Ubi Portal displays the local environmental readings picked up by Ubi's sensors. It's also the place to change settings and add contacts to Ubi's address book. Custom commands can be set up, and Ubi can be linked to devices in a home automation setup. Ubi currently has two female voices to choose from; the review unit defaulted to Kendra, but the Ubi voice is just as pleasant and is the one I have settled on. App and firmware updates are handled automatically.
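Those custom behaviors amount to mapping a trigger phrase to an action, such as sending a stock email when a set command is issued. As a loose illustration of the idea only (the structure and helper names below are my own, not the Ubi Portal's actual format):

```python
# Loose illustration of a phrase-to-action mapping of the kind the Ubi
# Portal's custom commands expose; structure and names are hypothetical.
import smtplib
from email.message import EmailMessage

def send_stock_email() -> None:
    """Send a pre-written ("stock") email when the matching phrase is heard."""
    message = EmailMessage()
    message["From"] = "ubi@example.com"
    message["To"] = "partner@example.com"
    message["Subject"] = "Heading home"
    message.set_content("Leaving the office now, see you soon.")
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(message)

CUSTOM_BEHAVIORS = {
    "send my heading home email": send_stock_email,
    # "turn on the living room lights": lights.switch_on, and so on
}

def handle_command(transcript: str) -> None:
    """Run the action registered for a recognized trigger phrase, if any."""
    action = CUSTOM_BEHAVIORS.get(transcript.lower().strip())
    if action is not None:
        action()
```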

Since the fanless device is meant to be always on, I asked what would happen in the unlikely event of it starting to run hot. I was informed that Ubi will auto-reset if it detects an issue, which should help cool things down. During this reset period Ubi will go offline, but for no more than four minutes.

Awaiting your command

A blue LED lights up when Ubi detects the phrase "OK Ubi"

Ubi listens for the wake-up phrase "OK Ubi" and opens up Android's speech recognition library when it detects a user has uttered the magic words. The LED strip flashes blue to indicate a ready state. A user would then voice a query. Ubi waits for a pause, and then sends the data to Google for text conversion. This is retrieved by Ubi and sent to UCIC's servers (the Ubi Cloud), where the natural language understanding engine (NLU) parses the text to try and understand what the user needs (look up information, integrate with home automation, or send a message, for example).

Once deciphered, the desired action is taken or information retrieved and the results are spoken out through a text-to-speech engine via the stereo speakers. Though this might sound like each request would take an absolute age to deal with, Ubi is actually pretty quick – most answers are returned within a second or two.
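Strung together, the round trip looks roughly like the sketch below. It is pseudo-code of the flow as described, not UCIC's implementation; each function is a placeholder for a real component (the Sensory trigger, Google's speech-to-text, the Ubi Cloud NLU and the text-to-speech engine).

```python
# Sketch of the request flow described above. Every function is a placeholder
# for a real component; none of these are actual UCIC or Google APIs.

def wake_word_detected(frame) -> bool:
    """Stand-in for Sensory's TrulyHandsfree trigger listening for "OK Ubi"."""
    ...

def record_until_pause() -> bytes:
    """Capture the spoken query until the user pauses."""
    ...

def speech_to_text(audio: bytes) -> str:
    """Stand-in for the Google/Android speech recognition step."""
    ...

def parse_intent(text: str) -> dict:
    """Stand-in for the Ubi Cloud NLU working out what the user wants."""
    ...

def execute(intent: dict) -> str:
    """Look up information, trigger home automation, send a message, etc."""
    ...

def speak(answer: str) -> None:
    """Stand-in for the text-to-speech engine playing through the speakers."""
    ...

def main_loop(microphone_frames):
    for frame in microphone_frames:
        if wake_word_detected(frame):          # LED band flashes blue here
            audio = record_until_pause()
            text = speech_to_text(audio)       # audio goes to Google for conversion
            intent = parse_intent(text)        # text goes to the Ubi Cloud NLU
            speak(execute(intent))             # the answer is read back aloud
```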

I have to admit that I found having to say the wake-up phrase prior to each query a little tiresome, but it's about par for the course with similar voice-enabled technology – like needing to say "OK Glass" each time you want Google Glass to do your bidding, for example (though Glass lets you undertake some actions by swiping or tapping its touchpad; with Ubi it's voice or nothing).

The development of the NLU is described as currently being "early and basic" and I encountered many misfires, where the speech recognition algorithms just didn't seem to know what the hell I was trying to say. I asked Ubi who invented the Stratocaster, for example, and it thought I'd asked who invented "stuff." When I cheekily asked Ubi to describe Gizmag, it responded with "I wished you could tell me about Gizmag."

The device is edged by a wavy multi-colored LED light band sandwiched between the front and back of the casing

Though trigger sensitivity can be adjusted in the Command Center of the Ubi Portal, speaking clearly is also important for best results. "It's important to enunciate each word as slurring them together tends to cause issues," Grebler told me. "Also, speaking more slowly helps. We hope to be able to adjust for different English speaking accents in the coming months. Up to 6 ft (2 m) away or closer tends to work and environments without a lot of background noise. We are working to improve this."

I did indeed get more successful hits by improving my diction, but asking questions in RP (Received Pronunciation) is not the most natural way to interact with a computerized knowledge base, and did raise a few eyebrows. Possible issues with my accent aside, the more specific the query to Ubi, the better the chance of hitting pay dirt. The question "Who is the President of France?" would result in a description of the role, whereas "Who is the current President of France?" would get me the answer I was looking for.

On occasion, even if Ubi recognized what I was asking, the answer could be surprising. I asked "Who is Joe Satriani?" and Ubi's logs (more on this later) showed that the question was understood. The answer, however, was "Stupid is a song written by Sarah McLachlan..."

Ubi can also play music to help a beta tester chill out, though this is not without issue. You could just let Ubi decide by telling it to play music, or you could try to find something more specific to your tastes. Either way, it searches for song matches on SoundCloud, which means that finding what you want won't always be possible. When Ubi does find something to suit your mood, about 10 seconds in it will ask if you want to continue to the end of the song.

This is handy if you want to end the chosen caterwaul, but just a tad annoying if you're just getting into the groove. Once you instruct Ubi to play the song to the end, there's currently no way to interrupt playback, though such options are on the to-do list. Interestingly, I know that one of Gizmag's writers has tracks on SoundCloud, so I asked for him by name. Sadly, I was misunderstood and didn't get to hear his hypnotic guitar music.
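For what it's worth, finding music this way boils down to a keyword search against SoundCloud's public track catalog. A minimal sketch of such a lookup against SoundCloud's public API follows; the client ID is a placeholder you would register for yourself, and I'm not suggesting this is the exact call Ubi makes.

```python
# Minimal sketch of a SoundCloud keyword track search. CLIENT_ID is a
# placeholder; this is not necessarily the request Ubi itself makes.
import json
import urllib.parse
import urllib.request

CLIENT_ID = "YOUR_SOUNDCLOUD_CLIENT_ID"  # obtained by registering an app with SoundCloud

def search_tracks(query: str, limit: int = 5) -> list:
    """Return up to `limit` tracks whose metadata matches the query."""
    params = urllib.parse.urlencode({"q": query, "limit": limit, "client_id": CLIENT_ID})
    with urllib.request.urlopen("https://api.soundcloud.com/tracks?" + params, timeout=10) as response:
        return json.load(response)

for track in search_tracks("hypnotic guitar"):
    print(track.get("title"), "-", track.get("permalink_url"))
```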

The Ubi Portal Command Center, which displays local sensor readings and icons to control the computer's operation

Ubi's online Command Center shows information about local light and sound levels, temperature and other readings picked up by its onboard sensors, but even armed with the phrases Ubi might expect to hear before delivering this kind of information, I was unable to get the device to tell me what I wanted to know. Grebler told me that this aspect is still being fleshed out, and is currently up and down, which is a great pity.

Of the two email addresses I saved to the Contacts page, I was only able to get the Ubi to send a message to one of them, which might again be down to my non-American accent (though I did also try a few John Wayne and Clint Eastwood impersonations, without success).

To sum up, development has come a long way in a fairly short time, but there's obviously still a fair amount of work to be done to knock the system into shape. Despite its faults and foibles, this little box of tricks is fun to explore and gives a good indication of things to come.

Logging your every word

A section of the Command Center in the Ubi Portal is reserved for user interaction logs. This is at once intriguing, interesting, eye-opening and worrying. After chuckling through some of Ubi's erroneous interpretations of my requests (and seeing a few expletives reproduced), my thoughts turned to issues of privacy and security.

Grebler explained that user logs "are encrypted and stored on our server. Our technicians do have access to the utterances, but not anything that goes into a message (for example, if it asks to speak the content, we don't see it). Currently, we're using this to test accuracy for the beta tester program but as we proceed out of beta, we'll disconnect and fully anonymize this information. Only the user has access besides us. We don't share, sell, rent, etc. this information. After the beta, we may only have an opt-in model."

Ubi isn't "listening" all of the time. It waits for the wake-up phrase before starting the conversion to text and deciphering magic, though users can press the "U" button to the front of the housing to mute the microphone if paranoia gets the better of them. User data stored on UCIC's servers is deleted after seven days.

The bottom line

The Ubi project is still in beta, so expecting it to perform flawlessly is unreasonable. The hardware seems pretty much ready for public consumption, but the behind-the-scenes mechanics still have some kinks that need ironing out. As it stands, those who will get the most out of a pre-release Project Odyssey unit are third-party developers or intrepid testers who want to help move the project forward into the commercial space.

The white light to one corner indicates a powered on and ready state

For those not able to snag an invite to the beta party, it will be a little while yet before this voice-activated, ever-ready, all-knowing computer pal makes its consumer debut. But if my experiences with the Ubi review model are anything to go by, I think it will be worth the wait. We'll keep you posted.

Source: UCIC

2 comments
Daishi
I wonder how much something like this would benefit from using the Wolfram Language: www.youtube.com/watch?v=_P9HqHVPeik
It seems like it would save a lot of the effort involved, unless that is the platform they are building on. The Wolfram Language (and Mathematica) come with the Raspberry Pi, but only for non-commercial use.
At least right now, I don't think I could get much use out of having voice-only communication with a computer. Even if I am limited to voice input, I don't see a good reason the responses back to me have to follow the same limitation unless I am visually impaired.
They say a picture is worth a thousand words, and I think most of the time I would simply be asking the computer to fetch some type of data that I could probably look at faster than having it read to me.
Even something as simple as the weather forecast is greatly helped along by a visual aid, and as seen here there is value in just seeing how the computer is logging the words you are speaking.
HighPockets
I have Google Now on my Moto X and I love it, but... you have to realize that it has the vocabulary of a high school student. I am an academic, and Google is teaching me to use simple, basic words. On the other hand, it recognizes such arcana as Polyphemus (and "arcana" itself).