Some discussion has begun on Quora.com about the current limitations of the Google Glass Developer edition and why Google has made its current choices.
One Glass Explorer writes:
As a developer I was excited about the concept of Google Glass, and could not wait to read the API specifications when they were released (kudos to those who can relate). However, after a brief review I was underwhelmed.
From an application standpoint, it seems there are quite a few things missing:
- No augmented reality (screen is a tiny square in the upper right of your view)
- No ability to access audio-visual data (I can’t access the microphone or camera)
- No cellular (you will need to pair it with a smartphone and drain its battery too)
- No hardware access, period.
Although there are some built-in features for end users, as a developer, the Google Glass appears to be nothing more than a limited wearable display that I can push very limited static content into. It’s Twitter, but beamed straight into the user’s eyeballs. I’m sure some clever individuals will think of some things to do with that, but every idea I’ve come up with so far requires access to the hardware.
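The push model the writer is describing is Google's Mirror API: a backend service doesn't run code on the device at all, it POSTs static "timeline cards" to Google, which relays them to Glass. A minimal sketch of what building such a card looks like, assuming the timeline endpoint and JSON field names from the Mirror API documentation as published at launch (the OAuth token is a placeholder):

```python
import json

# The Mirror API timeline endpoint (as documented at launch).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_card(text, speakable=False):
    """Build the JSON body for a simple static timeline card."""
    card = {"text": text}
    if speakable:
        # Lets the wearer have the card read aloud -- still just static content.
        card["speakableText"] = text
    return json.dumps(card)

body = build_card("Hello from a backend service", speakable=True)
# A real service would now send this with an authorized POST, roughly:
#   requests.post(MIRROR_TIMELINE_URL, data=body,
#                 headers={"Authorization": "Bearer <token>",
#                          "Content-Type": "application/json"})
```

Note what is absent: there is no hook here for reading the camera, the microphone, or sensors — the service can only push content down, which is exactly the limitation being complained about.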
Here are some apps that can’t be built because of this:
- Facial recognition (can’t access camera)
- Lifecasting (can’t access camera or audio)
- Item price checking (can’t access camera)
- Augmented reality video games (screen isn’t big enough)
- Augmented reality GPS (no access to user location unless paired to Android)
I feel like this is the stuff people have been talking about for years, and the hardware is there, but it’s clearly not on the immediate roadmap. Why?!
The Gillmor Gang did a recent video on Glass, talking to Robert Scoble about his first impressions of using the Glass Explorer Edition. Many of the above points were covered. In particular, many of the features we see are a compromise between current technological limits and offering useful functionality without obscuring the user's view with a display.
Robert replied to the above comments:
I talked with some people on the Google Glass team when I picked mine up earlier this week about some of these issues.
The team made a philosophical choice to have the screen above your eye line to keep it “human.” Also to avoid distraction issues when walking around or driving. The battery life is a real problem too. One six-minute video I did took 20% of the battery. So, Google designed these to have a very simplistic UI, cards, and have them on screen for just a few seconds, to save battery.
There are two additional concerns:
1. Google wants to make Google+ the centerpiece of the Glass experience.
2. Google wants to keep people from getting freaked out about privacy concerns.
Add all these things together and you can see why Google doesn’t want to allow in-app image processing.
It’s frustrating, yes, but after seeing these constraints and having the Glass for a few days now, I get why Google made the API choices it did. Will Google add more to the API over time? I bet it will.
I can solve the battery issues with an external battery pack, but so many people are so freaked out about the privacy concerns that I’m not sure we’ll quickly see an answer to those.
As to the “strategy taxes” that Google faces internally (i.e., making Google+ the first-class Glass citizen), I’m not sure how to solve those. Microsoft faced similar problems with its Tablet PC, and it took eight years for Apple to come along and blow away Microsoft’s efforts with the iPad. That solved those issues. I bet that in the next five years we could see a competitive product from, say, Facebook, that will blow away the Google effort.