As industrious hackers begin getting units, every inch of the device will no doubt be cataloged soon enough. For now, consumers will have to settle for a few of the initial specs from the developer version of the gadget.
Jay Lee, a software developer for Google Apps reseller Dito, got his Glass unit this week and has been geeking-out over the device on Google+. He has begun toying around with the device’s debug mode and listed some key specs for the Glass, including the processor and memory included:
I realize that with innovative products like Glass, the experience is more important than the hardware specs. And the experience is pretty incredible! Having said that, it's Friday, I'm a geek and it's still awesome to nerd out on the guts. +Liam McLoughlin (Hexxeh) also found the USB debugging setting and got ADB working (looks like it was broken on my primary machine). Once I got it working I pulled up some details about Glass. Key points are:
* It's running Android 4.0.4 – Ice Cream Sandwich – just as Larry Page said
* It's an OMAP 4430 CPU – Dual Core? – Having trouble finding exact mhz
* There's 682mb of RAM (678052kb reported in /proc/meminfo). Kernel messages lead me to believe it's actually 1gb but some is being used for other hardware purposes(?)
If you know Android pretty well and have additional questions on the Hardware or Glass OS you'd like answered (and know the commands that will answer them), feel free to post in the comments and I'll see what I can do.
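For readers who want to poke at the same numbers, figures like these come from standard ADB queries against a device with USB debugging enabled. A minimal sketch of the memory conversion (the sample value is the one Lee reported; note that 678052 kB works out to roughly 662 MB by the 1024 convention):

```python
# On a connected device, the raw data would come from:
#   adb shell cat /proc/meminfo
# Here we parse a captured MemTotal line, which the kernel reports in kB.

def meminfo_total_mb(meminfo_text: str) -> float:
    """Extract MemTotal (reported in kB) and convert to MB (1 MB = 1024 kB)."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kb = int(line.split()[1])
            return kb / 1024
    raise ValueError("MemTotal not found")

sample = "MemTotal:         678052 kB"
print(round(meminfo_total_mb(sample)))  # → 662
```

The gap between the reported total and a full 1 GB is consistent with Lee's guess that some physical memory is reserved for other hardware before the kernel sees it.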
To put some of the info in context, the OMAP 4430 CPU has been used in mobile devices such as Samsung’s Galaxy S II and the Kindle Fire. Previous reports have shown that Glass has a 640 x 360 display and a 5-megapixel camera. In other words, Glass won’t match up to this year’s (or last year’s) cutting-edge smartphones in terms of power, though it’s still plenty powerful for a wearable computer.
Learning a foreign language is hard. I’m sure you all know this, but I just wanted to remind you how soul-crushingly difficult it is for anybody over the age of 10 to wrap their brains around a new language. I took Japanese for three years and I still don’t know much of anything. Google’s Project Glass may help speed that up a bit.
Will Powell, the Oxford developer who built his own pair of Google Glasses, is at it again. This time he has added real-time translation: the glasses display, as text, what the person next to you is saying as they say it. While Powell's build uses Microsoft Translate, it's not much of a stretch to imagine Google using its own translation software in Project Glass. Check it out:
The video is more of a proof of concept than anything at the moment, but Google should pay attention. If they want people to use Project Glass in their everyday life, it needs to do more than just take first-person pictures.
It should be noted that Spanish is a particularly easy language to translate to English and she was speaking rather slowly. It would be interesting to see how this technology handles native Spanish speakers who are known for speaking faster than I thought humanly possible.
Does Project Glass really make learning another language unnecessary? Not at the moment, but it does have the immediate advantage of facilitating faster learning. Being able to see what somebody is saying to you in real time without having to consult a translation dictionary would be a great benefit for those trying to learn a language. I might even be able to finally learn more Japanese beyond the 20-odd phrases I know.
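The demo's moving parts reduce to a short loop: recognize speech, translate it, push captions to the display. A hedged sketch of that pipeline, with stand-in functions (Powell's actual build used Microsoft Translate; none of these names come from his code):

```python
from dataclasses import dataclass, field

@dataclass
class Display:
    """Stand-in for the heads-up display; just accumulates caption lines."""
    lines: list = field(default_factory=list)

    def show(self, text: str) -> None:
        self.lines.append(text)

def translate_stub(text: str, source: str, target: str) -> str:
    # Stand-in for a real translation API call (Powell used Microsoft
    # Translate); unknown words pass through untranslated.
    phrasebook = {("es", "en"): {"hola": "hello", "gracias": "thank you"}}
    return phrasebook[(source, target)].get(text.lower(), text)

def caption_loop(utterances, display, source="es", target="en"):
    # In the real system, utterances would stream in from a microphone via
    # speech recognition; here they are just an iterable of recognized text.
    for utterance in utterances:
        display.show(translate_stub(utterance, source, target))

hud = Display()
caption_loop(["Hola", "Gracias"], hud)
print(hud.lines)  # → ['hello', 'thank you']
```

The hard part in practice is not this loop but the speech-recognition latency and accuracy feeding into it, which is exactly where fast native speakers would stress the system.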
The future is here, folks. Now it’s up to Google and the developers who paid $1,500 for a pair of glasses to make it happen.
By now you’ve probably heard of Google’s Project Glass – aka Google Glasses. Google revealed this amazing technology back in April. According to statements made during last week’s Google I/O conference, it should reach consumers’ hands by next year.
If you’re much of a tech fan at all, you’re probably drooling like crazy over these things. If you’re an Apple fan, then you may be wishing that Apple would hurry up and launch their own version. Well, it looks like you may get your wish. Apple has been awarded a patent for something that looks an awful lot like its own version of Project Glass. The patent, which Apple originally applied for way back in 2006, is titled “Peripheral treatment for head-mounted displays” and covers “[m]ethods and apparatus, including computer program products, implementing and using techniques for projecting a source image in a head-mounted display apparatus for a user.” In other words, Apple’s own Project Glass. Or iGlasses, if you will.
Strictly speaking, the patent is focused on methods for reducing eyestrain that can be caused by having a head-mounted display so close to the wearer’s eye. Nevertheless, it means that Apple is at least somewhat interested in developing its own wearable display. Whether they ever actually bring such a product to market is still very much up in the air. A lot will likely depend on how well Project Glass does.
I think we can all agree that Google showed off the potential of Project Glass during Google I/O in an extreme way. Unfortunately, not everybody is a skydiving, extreme biking, rappelling sports star. What can Project Glass do for the less extreme person? Google has started a campaign to answer that very question.
You may remember a decidedly less extreme video from the Project Glass presentation during the keynote. It featured a mother wearing Project Glass and showing off her newborn child to her family via a Google+ hangout. The cool thing was that Project Glass allowed her to shoot all of her child’s moments in first-person. It was like filming a child’s life, but with the added benefit of broadcasting it live to the family instead of sending them DVDs five months down the line.
Google has now revealed the purpose of that video – it’s called Glass Sessions. It’s the first in a series of videos meant to show “what it’s like to use Glass while we build it, through the eyes of a real person, in real life.” Of course, using a baby is going to tug at heartstrings already conditioned by the endless stream of puppy and kitten pictures we see all over the Internet. It’s an effective marketing tool and will probably get a few mothers more than interested in the technology.
Remember, this is the first in a series. Google is remaining tight-lipped about what’s up next, but I’m sure it’s going to amaze us again. I’m personally hoping to see extreme uses like a mountain climber using Project Glass to snap pictures of his climb up Mt. Everest.
We think Glass helps you share your life as you’re living it; from life’s big moments to everyday experiences.
Today we’re kicking off what we’re calling Glass Sessions, where you can experience what it’s like to use Glass while we build it, through the eyes of a real person, in real life. The first Glass Session follows Laetitia Gayno, the wife of a Googler, as she shares her story of welcoming a new baby, capturing every smile, and showing her entire family back in France every “first” through Hangouts.
Google ended the Google I/O Day 2 keynote by rehashing yesterday’s Project Glass skydiving stunt, but “from a different perspective”. This time, it was much less exciting, especially once the reality set in that they weren’t really announcing anything new.
Perhaps the biggest takeaway from the whole thing was that Brin was wearing some kind of clip-on sunglasses for Google Glass, which he referred to as a new iteration of Google Glass: “shade clip-in.”
Essentially, viewers saw the same stunt as the one Google pulled off yesterday performed again, but we were given a behind-the-scenes Hangout view from the players’ perspective.
If you haven’t seen the stunt, you can watch the full presentation here:
Google’s latest big endeavor, Project Glass, has been several years in the making and is still not even finished, but as its creators test out what it can do and broadcast it to the public, the web is buzzing with talk about what it could mean for the future.
While Glass is available for pre-order now for developers who attended this year’s Google I/O conference – at the hefty price of $1,500 – it won’t be ready for another year, and it won’t hit the mass market until at least 2014. For those skeptics who don’t quite understand why anyone would want to wear a headset that keeps them jacked into technology at all times, the team at Google prepared a little presentation on all the awesome things you can do with it.
Of course, the Twitterverse is one of the busiest places regarding speculation and conversation about Project Glass. Here’s what people are saying:
This year’s Google I/O keynote presentation was a whirlwind of huge announcements. Developers and reporters at the conference would have been overwhelmed with just the Android 4.1 Jelly Bean, Nexus 7, and Nexus Q announcements. Nobody was holding out too much hope for a big Google Glass announcement, but Sergey Brin wasn’t about to have his pet project upstaged by tablets and music spheres. The Google co-founder staged a massive set piece involving a Google+ Hangout with skydivers, cyclists, and rappellers. Luckily, no one died, and Brin announced that conference attendees will have a chance to pre-order the “Explorer Edition” of Glass for $1,500.
That was all of the big news about Glass announced yesterday, but Google also began what could be considered an even more important – and more risky – public relations strategy by opening up about its hopes for Glass, and even allowing a few reporters some hands-on time with the device.
Isabelle Olsson (pictured above), the industrial designer behind the current Glass design and a large part of yesterday’s presentation, was roaming the convention with a Glass on her head. Business Insider‘s Owen Thomas was able to quiz her about the Glass design. Olsson told the reporter that her inspiration was to make the Glass headset “as minimal as possible without being boring.”
All Things D‘s Liz Gannes also spoke to Olsson, who told Gannes that she obsessively weighs new Glass prototypes down to fractions of a gram. Brin, who was also showing off Glass at the conference, told Gannes that he hopes the device will be less disruptive than phones, which require the user to look down to use them. Gannes’ impression of Glass, which Brin allowed her to try on, was that the device was “not immersive,” which is Brin’s point.
TechCrunch‘s Peter Ha had a similar opinion of the device, stating that the display did not hinder his ability to look around and disappeared until it was needed. Ha also reported overhearing Brin state that battery life is a focus of the design. The current prototype lasts for six hours, and the design team is looking at ways to extend battery life through software. Brin also told Ha that the consumer version of Glass will be “significantly” less expensive than the $1,500 version that conference attendees could pre-order, though the Glass team isn’t focused on making the device as cheap as possible.
CNET‘s Rafe Needleman stated that the device was locked into a “demo mode” that only showed a looped video of fireworks. He describes the image, as others have, as postage-stamp-sized, and says the perspective of the display shifted as his head moved. The audio of the prototype required Needleman to cup his hand over his right ear, a “feature” Brin told him is good for letting others know that the device is currently the center of attention.
It’s good that Google is keeping its “beta” culture alive and not keeping its projects secret until launch. It can be fun for a company to drop a technology bomb on the industry the way Apple did with the iPhone, but Google’s approach seems to build its own kind of excitement through anticipation. And, if Glass is able to go to market next year the way Brin hopes, the device just might disrupt the smartphone market as a bomb would anyway.
It’s Google’s day to shine with their developer event (Google I/O, of course) going on in San Francisco, and while people wait for Google to announce the next big thing coming down the pipe, the fact that Sergey Brin has been walking around in Google Glasses might give us an idea of where the company sees itself in a few years. Until then, however, we can live vicariously through Brin.
As pointed out by folks in attendance, Brin has been spotted wearing the Google Glass headset, although pictures of him wearing it at Google I/O haven’t been taken and/or released yet. The lead image was taken when the headset first appeared. The find was pointed out on Twitter:
In case you’re wondering what, exactly, one can do if they decide they want to purchase a pair of Google Glasses, the following video gives us an idea of what users can expect:
Considering how much information Google has on its users – and will have, once Android users update their devices to include Google Now, Google’s answer to Apple’s Siri – is giving that kind of access to your day-to-day life, provided you’re wearing the Google Glass headset all day, something you’re comfortable with?
Despite reports that the Google Glass headsets are only a prototype, Google co-founder Sergey Brin stated last week that he hopes Glass will be available to consumers by next year. It looks as if Brin’s timetable is on schedule, as Google has recently been conducting marketing research for the devices.
Xavier Lanier over at GottaBeMobile is reporting that Google has been setting up booths at San Francisco street fairs and giving the public the chance to try out the cutting-edge technology. Lanier found one of the booths at the Union Street fair and quizzed the booth attendant on what was inside the tent, but was told he could only go in if he “qualified.” From Lanier:
I chatted briefly with a couple of survey participants who were milling around. They told me Google made them agree not to talk about the survey or what they saw, but the product “lets people see stuff and take photos like…in a really cool way.”
The screening survey included questions about outdoor physical activities, numbers of social network followers, technology buying habits, and phone platform use. From the questions, it appears that Google is trying to get away from the tech-geek crowd, who generally have a “love it or hate it” attitude toward the device. I suppose Google will soon know what outdoorsy early adopters with lots of Facebook friends think about Glass.
For what it’s worth, Lanier said he qualified for the research, and he describes himself as a “30-something dad that is into new technology, active on social networks and spends time outdoors.” He was told, however, that he would have to come back another day.
Forbes has a report on Google Glass comparing it to Instagram and the new Facebook Camera app for iOS. The report argues that Glass will be standing in the way of Facebook’s maneuvering for a larger mobile presence.
These sorts of predictions might be a little hasty, for two reasons. First, Facebook has not yet been able to get a firm foothold in the mobile realm. Though Facebook CEO Mark Zuckerberg has promised investors a greater focus on mobile (and might deliver on that promise with a Facebook phone), the company still has a long way to go before mobile ads begin taking in the type of revenue Google is seeing from mobile.
Second, Google Glass is by no means a guaranteed hit. Though smartphones are beginning to proliferate into every corner of the U.S. market, wearable computers will take quite a while to catch on. Perhaps Apple could market a Glass-type device and make it seem “cool,” but I suspect Glass will be to mobile devices what early Android was to mobile OS’s: the more functional alternative for early-adopter techies. Plus, there is still a stigma hanging over wearable devices that was created by Bluetooth earpieces early last decade.
There is a point to be made about the connection of Google Glass and social networks, though it was missed in the Forbes article. The first time a Googler allowed a non-Googler to wear one of the Glass headsets in public was last week, when Sergey Brin allowed a handful of photographers to don the device and walk through downtown San Francisco. This was no spur-of-the-moment decision. One of the few niche communities to embrace Google+ fully has been the photography community. The well-designed gallery views and Picasa integration are just two of the reasons photographers have adopted Google+ as their own.
By allowing professional photographers to preview the device, Google is signaling that Project Glass is not simply a mobile device, but also a social device. Photos are a key part of modern human social interaction, and Glass will provide a way for users to share their first-hand experiences, to allow others to see things from their perspective. Facebook wants users to fill their Timelines with a record of their life. However, if all of those experiences are filtered through the view of a Google product, will Facebook ever be able to get ahead?
Despite showing the headsets off quite a bit, Google has been very protective of who it lets wear Google Glass. It seems the company is opening up more about the technology though. Last week Sergey Brin, Google’s co-founder, allowed several photographers to try out the specs on a field trip around downtown San Francisco during the Google+ Photographer’s Conference. At the conference, a video taken by a Google Glass headset was shown.
This week, Brin appeared on The Gavin Newsom Show, the California Lieutenant Governor’s talk show on Current TV. There, Brin showed off Glass, discussed the future of Google, and revealed what his current work at the company entails. Current has released a clip of the hour-long interview, which can be seen below. During the interview Brin allows the host to try on Glass, showing Newsom a picture of himself that was just taken by Brin.
Newsom asked some very probing questions of Brin during the short clip Current has revealed. When asked how long Glass has been in development, Brin stated the project was “two or three” years old, but that he had only been involved heavily for the past year. Amazingly, Brin also stated that he hopes to get Glass out onto the market by next year, though he reiterated that it is just a hope. Currently, the Glass headsets are only an early prototype, without even a proper user interface.
Brin told Newsom that Google X, the Google division behind Glass and Google’s self-driving cars, is now his primary focus at Google. “It’s sort of an advanced skunkworks, and we try to prototype really far-out projects,” said Brin.
The clip from the interview is embedded below. The rest of the interview will air at 11:00 am EDT this Friday on Current during The Gavin Newsom Show. Those without the Current cable channel will have to wait until the full interview is posted on the Current website.
For something that’s just a prototype, Google sure is showing off its Project Glass headsets quite a bit. From public television to Google’s CEO, the company is using Glass to shore up its futurist credentials.
Yesterday, Google co-founder Sergey Brin showed off the project in front of hundreds of professional photographers at the Google+ Photographer’s Conference in San Francisco. His presentation included photos taken with the device, and also something that hasn’t been seen before: a video shot with Google Glass. It’s grainy and is in no way high-definition, or even of marginally good quality, but it’s real. The photos and the video from Brin’s presentation have all been released on the Project Glass Google+ page, and the video is embedded below:
In addition to the presentation photos and the trampoline video, Google Developer Advocate Chris Chabot uploaded an album full of pictures he took during the photographer’s conference. He, Brin, and a small group of photographers took a walk around San Francisco while trying out Google Glass. From Chabot’s Google+ post about the event:
I think the general reaction for anyone who got to put them on was whoah, we’re in the future already it’s such an exciting project and to imagine what will be possible with these in the future, is like reading a scifi story, to me it’s right up there with self driving cars and jet packs, apparently the future really is now!
Did he just say jet packs? Is that what’s coming next from Google X? Yes, please.
Photos are clearly an important part of Google’s larger strategy, and Google+ has been an impressive offering into the photography world. Google is working on what could be an even more impressive offering, however, with Project Glass (aka: Google Glass), the company’s futuristic glasses project.
Google has been clear from the beginning that the glasses portrayed in the introductory Project Glass video were a little more concept than reality. They’re in the early stages, and so far, it’s not about augmented reality, but Google does seem to be playing up the photography angle pretty hard.
Project Glass engineer Sebastian Thrun recently went on Charlie Rose and demonstrated Google Glass-based photography. Now, at Google’s Google+ Photographers’ Conference, they’ve been letting photographers play with them.
Google co-founder Sergey Brin led a photowalk, and several participants have been posting about it on Google+:
Google Co-Founder Sergey Brin stopped by the #gpluspc conference today for a photo walk, and his crew had some Google Glass prototypes on hand for people to try out.
He was a good sport about me playing the part of paparazzi, thankfully 🙂
This week, Google’s CEO (and other co-founder) Larry Page wore the glasses as he spoke at Zeitgeist 2012:
Actually, it’s only Google Glass, as opposed to “glasses”. Page notes that this is the case because the glass is only on one side.
“It’s still in a bit of an early stage, but I’m still excited to have one, and have it working,” Page told the audience, adding, “It doesn’t yet show me all of your names, but I’m really glad that you’re all here.”
It is important to remember that Google Glass is indeed only in its early stages, and does not do all the stuff in Google’s concept video:
They don’t do augmented reality yet, but clearly Google is aiming high with these things (contact lenses may even emerge at some point). Google seems to really be promoting them, either way, as evidenced by Page speaking while wearing them.
Can’t wait for the Google Glass headsets to arrive? A company called Vergence Labs has some eyewear that just might tide you over until Google Glass is finally out of beta.
The company has started a Kickstarter campaign to raise $50,000 for what they claim are the world’s first electric-powered sunglasses. The basic pair the startup touts switches from shades to normal glasses with the flick of a switch. In addition, the specs have a 720p video camera on the bridge of the frames, allowing them to record a user’s-eye view to a microSD card. Kickstarter backers who pledge $199 or more will get their very own electric sunglasses. The glasses currently come only in large, blocky frames. Luckily for Vergence, that style is currently popular, though the company is still working to miniaturize the components to fit them into smaller frames.
Vergence hopes users of the electric glasses will upload videos they take to their future website, YouGen.Tv. The company suggests that users would be able to share their view of life and help educate the world on what they see. In reality, though, if these glasses take off as much as Vergence’s founders hope, they are going to be dealing with a huge Chatroulette-type problem.
Aside from the basic model of electric sunglasses, Vergence is raising money for its “immersive reality” visor glasses, seen above. The visor, which looks like something a fighter pilot would wear, displays an overlay that can be programmed to interact with elements from the visor’s view. This is the same type of augmented reality that Google Glass promised with its announcement trailer, and it can be had for pledging $7,000 to the Kickstarter campaign. The device is only shown running some face-recognition software, but one Vergence co-founder claims the company hopes to implement gestural interfaces in the near future.
That future had better be very soon, as the Kickstarter campaign estimates that both the electric sunglasses and the augmented-reality visor are due to ship before Christmas of this year. That gives Vergence just over seven months to perfect something that appears to still be an early prototype. Watch the founders of Vergence Labs pitch their products in the video below, and decide for yourself if their ideas could be implemented by December.
We’ve all seen the parodies. More than we can count, at this point. Still, all the attention Google’s Glass project has gotten can be interpreted as giddy excitement for the cyborg-vision we’ve all wanted since childhood. It’s sad, then, to learn that the current sets of Glass glasses that have been shown off by Sergey Brin and Sebastian Thrun are nowhere near what the announcement video for the project made them seem to be.
Rafe Needleman, a writer for CNET, points out that Google Glass will only display information on the edge of a person’s view. Specifically, information is provided at the top of a person’s line of sight. Needleman cites a Google spokesperson who isn’t Vic Gundotra, Google’s senior vice president of social, as confirming the news. He reports being told that it is too early to even know what functions the devices will have or what type of UI they will use.
It’s comforting to know that text messages won’t be popping up in front of our faces as we walk down the street. All the same, it’s disappointing that the future-world where an augmented, virtual worldview is provided and layered over every real-life object is still years away. The prototypes, it turns out, are actually prototypes. Sometimes Google can be such a tease.
For those who still think Google’s Glass project is simply an experiment, a joke, or a public relations stunt, today brings even more evidence that the augmented reality glasses are real. Google has been granted three different patents (1, 2, 3) today for the design of different Google Glass headsets.
One, seen above, is the nose-wire frames Google has already shown off in several places. Two of the patents cover this design, which seems to be what Google has settled on as its main design. The other design patent, though, hints at how Google Glass might look as a full pair of glasses. The design, seen below, shows a more traditional pair of spectacles which could, conceivably, be fitted with prescription lenses.
Though the wire-nose frames look more design-conscious than most Google projects in beta, both designs look decidedly industrial. It will probably take an Apple design team or someone else comparable to create Google Glass frames that will truly capture the public’s imagination. Glasses designers such as Oakley, which claims to have been developing augmented reality specs itself, will be needed to help Google market the technology.
As seen in the pictures, there is a thicker area on the back of one of the sides of the frames where, presumably, the hardware for the glasses will be housed. It is interesting to note that the spot seems to be curved to place it near the back of the head when worn, perhaps because there is no doubt that spot will get warm when users are watching or streaming video. I’m thinking I should patent a design for heat-sink earmuffs that re-route the heat to keep ears warm.
Although Google’s Project Glass (or Google Glasses) continues to take a pounding on the interwebs, it’s ridiculous to try and remain unexcited by the possibilities of this new technology. Sure, people wearing Google Glasses could get distracted and walk into a street sign – but that’s really a small price to pay for some of the amazing things that we might be able to do with the final product.
And in order to make sure we have a great final product, Googlers are out testing Project Glass prototypes. These real-world appearances of the Google Glasses have a way of winding up on Google+.
Last month, Google X’s Sebastian Thrun sat down with Charlie Rose to discuss the project. During that interview, he snapped a picture of Rose with his Google Glasses and posted it to his Google+ profile (also with his Glasses).
That photo failed to truly impress. But this newest photo taken with the Project Glass spectacles shows just how awesome these things could be when regular people start getting their hands on them.
We announced Project Glass in part to let our team start testing prototypes outside the office. +Sebastian Thrun, one of our project leaders, tried one out last weekend and we just had to share the result.
We’d love to hear about the types of moments you’d capture if you didn’t have to wait to pull out a camera or your phone. Please share your thoughts in the comments!
That looks like another eventual marketing strategy for Google: “for all those times you missed a moment because you didn’t have your camera at the ready.” The photo, of Thrun and his son, really is cool:
Now that we know about Project Glass and the fact that other companies are working on their own prototypes of similar technology, it’s only a matter of time until the camera once again evolves. We just now hit the point where almost everyone has a camera in their pockets – and soon many people will have a camera on their face.
I absolutely love Project Glass, Google’s smart glasses project. While the concept itself is exciting enough, the potential uses in the future for the device are pretty awesome. From being a Terminator to playing Battlefield 5 in an empty lot, the potential is limitless. The latest concept/parody video might be the most truthful of all the videos, though, as it shows us what happens when Project Glass gets into the hands of the Internet.
YouTube user ElectRoulette has created what will probably be the Internet’s first use for Project Glass – turning them into Meme Glasses. What follows is a day in the life of what you might call a casual purveyor of Internet culture. I’m sure it would be much worse for the people that live and breathe the Internet.
Of course, there are some pitfalls that come with the Meme Glasses beyond using memes over a safe limit. The concept of being able to spot ninjas in the open is a terrifying prospect indeed. The glasses being able to tell your ethnicity just by how you drive might also open the door to some potential litigation.
Regardless, this is one of the best parody videos yet, because I can see this actually happening within a year of the glasses being on sale. Give people the power to take memes anywhere they go and the world would be a much better (and funnier) place. The only problem would be updating the glasses with the most recent memes. As we all know, there’s nothing worse than a tired, overused meme.
Google got a lot of people thinking when it unveiled Project Glass and its promo video showing how Google glasses of the future could work. It was cool. Plain and simple.
Of course, as is so often the case, things aren’t really that simple. It was a concept video, and few truly know what Google’s glasses can really do in their current state, or even how close Google really is to the currently fictional reality portrayed in the video. Experts in the field of augmented reality have expressed a fair amount of doubt, though there is still plenty of excitement coming from them as well.
AR technology concept broker/analyst Marianne Lindsell tells WebProNews, “Like the ‘Stark HUD’ concept (produced as far I can see as a sort of teaser for the Iron Man II film) I do suspect that the Google Project Glass video has a strong ‘Hollywood’ element.”
“I would guess that Google are both testing the market and managing expectation,” she says. “However also like Stark HUD, there has clearly been some use of technology in the production – and in the case of Project Glass, the tech/Hollywood ratio is I suspect much higher, if less than 100%.”
“The least realistic parts of the Google Glass video clip in my opinion are the field of view (a large FOV is needed – but can such a small device provide it?), the brightness (possibly – but there are some good techs out there), instant responsiveness, and to some extent the (presumption of?) excellent registration (which many AR concepts depend on, but Google have cleverly side-stepped in the clip by largely avoiding such concepts).”
“Focusing at an appropriate distance is possible (I have seen it), – but not in such a tiny piece of hardware (yet!),” she adds. “Even good registration is possible in some situations, – but any specs will be at the mercy of smartphone-like inertial and magnetic sensors (compasses are notorious), unless it can take its cues from the surroundings by image recognition and analysis (which some techs already do surprisingly well).”
Some Things To Consider
Lindsell highlighted some very interesting points about the Google Glasses in a comment she left on another WebProNews article. I’ll repost that comment here:
The glasses -concept- is definitely possible (as is the head tracking). I have seen a number of products that convince me of that – but the sleek designer package probably isn’t (yet).
There are usability thresholds in many areas that such a product will need to meet to be truly useful:
1) Field of View – the Google Glass product seems way too small to provide a useable FOV (no-one is yet aiming high enough here)
2) Brightness – a huge dynamic range is needed, – think about readability on a sunny day – and brightness takes power
3) Exit pupil – an optical engineering parameter that needs to rate highly or the slightest jiggle of the glasses on your face will rob you of the display
4) Focus – optics will be needed to focus the display at a useable distance
5) Transparency – too opaque and the readouts block out your view (mind that lamp post!) – too transparent and you can’t make out what the marker is saying
6) Zonation and format – you probably -never- want any readouts to appear within your central view area – designing them to appear in the optimum place on the periphery is vital. No large windows please! – prefer conformal indications and markers.
7) Probably more important than all of the above will be the off/standby switch – the default position should be standby – with a quick and easy way to switch ‘on’ while required
8) Responsiveness and Registration – such a device will be -very- sensitive to delays. A note for OS suppliers!
9) Driving – special case – needs an even more safety-oriented (and accredited) design – but by no means impossible – think HUDs in fast jets
When someone, let’s assume Google for now, first clears all of the above hurdles, then we may have a useable product, although you may not be as keen on it when you see how big the packaging is.
I’m not quick to believe that Google’s sleek, small package is possible. Even then, I am assuming that the device will need to be connected to your smartphone.
Of course it’s always possible that the Google device uses a laser to project the display using one of the eyepieces. That -might- allow a smaller packaging.
The concept of course remains valid, and the gauntlet is well and truly thrown down to all major players, to overcome the challenges.
As for all the different things such a product would be useful for, – I submit that we have only scratched the surface of AR as a whole.
Who would have imagined the WWW when first connecting two computers together (with due credit to Mr Berners-Lee).
AR is a whole new way of teaming technology with people. For that, the technology needs to be -really- people-friendly!
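Lindsell’s field-of-view point is easy to quantify. A flat virtual image’s apparent FOV is simply the angle it subtends at the eye, so a quick back-of-the-envelope check shows why a tiny near-eye display struggles to feel immersive. The image width and apparent distance below are illustrative assumptions for the sake of the arithmetic, not Glass specs:

```python
import math

def apparent_fov_deg(image_width_mm: float, viewing_distance_mm: float) -> float:
    """Angle (in degrees) subtended at the eye by a flat virtual image of the
    given width, centered at the given apparent distance (thin-lens idealization)."""
    return math.degrees(2 * math.atan((image_width_mm / 2) / viewing_distance_mm))

# Illustrative numbers only: a virtual image that appears 200 mm wide at an
# apparent distance of 500 mm subtends roughly 22.6 degrees -- far short of
# the ~180-degree horizontal span of natural human vision.
print(round(apparent_fov_deg(200, 500), 1))
```

Even doubling the virtual image width only gets you to roughly 43 degrees, which is why Lindsell argues that no-one is yet “aiming high enough” on FOV.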
“Many of these parameters will have a threshold level the tech must achieve in order to be useable and acceptable to the consumer market,” Lindsell told us in an email. “I am not about to nail my colours to the mast on exactly where to call these levels, but suffice to say that whilst many products out there have some way to go, some of them are, as far as I can see, showing signs that they may get there. This is why I think there may be some real tech behind the Google Glass Project. What we don’t know of course is how far along Google are yet. I think the clue is that it is far enough for them to test the market and attempt to manage expectation.”
We also asked Lindsell whether contact lenses might be able to deliver this kind of display. “Probably not,” she says. “Of course there are a few universities (and even Microsoft) actively researching electronic display contact lenses, but it is still early days yet. There are significant hurdles in terms of how to power them, and even greater ones in terms of how to focus the image at a suitable distance.”
“Producing a picture matrix with sufficient resolution, over a sufficiently wide FOV is also a major challenge, and although I can’t speak for ‘hidden’ projects, I am not aware that we are even within sight of the right ball park yet (apologies for mixed metaphors),” she continues. “But then there again – electronic focus is possible (I have seen it) – though not in a miniature package. Contact lenses –may- seem like they would help with the FOV and form factor problems, but in reality I think they would have to solve those problems, in miniature, first. I think the jury is out on when contact lenses may be able to deliver AR (though I’m thinking 10 years+), although I might predict that in the interim electronic (non-AR) contact lenses may find use as a health sensor.”
We may not know how much of what has been presented in Project Glass is really feasible at this point, but Google’s promo video has clearly generated a lot of enthusiasm. We asked Lindsell if she expects a lot of excitement and involvement from developers as a result.
“I think this is where Google have really scored,” she says. “People sit up and listen when Google speak. It is my firm hope that they will be able to market an attractive product before this interest dies down. And here’s the rub – truly useable AR specs will require –a lot- of engineering, and this needs funding, which means market interest. There’s a chicken and egg situation here – the market is only interested in what is realistically possible (hence your own interest I suspect), – but even organisations with the ability to fund development need to prove there is a strong demand to release those funds, as well as a sense that the end product is truly feasible.”
“There may be some hope,” she adds. “I have seen demonstrations of many existing AR specs technologies first hand (including Vuzix, Laster, Trivisio, BAe Systems and a few others) and although I have yet to see a single system meet what I might call a people-friendly acceptability factor, I have seen the current state of development of some of the component technologies.”
“This is why I think that AR specs will be possible,” she says. “What I am far less sure about, is the final form factor – but even here let’s not rush to judgement, as prototype devices are certain to be clunky and unpalatable, whereas there has been significant R&D and the final package may be acceptable (even if not quite as tiny as Project Glass). How far Google have really got with this, is anyone’s guess, but if they don’t have something up their sleeve, it would have been very brave of them to put about the Project Glass video clip, with such a tiny device – especially for Sergey Brin to be seen wearing them so openly.”
“If there is a secret here my guess would be laser projection (not onto the retina – which would require eye tracking, but creating a virtual image using the eyepiece lens) or possibly a cunning use of novel LED tech (there continues to be much R&D here – think blue LEDs and Shuji Nakamura – there was a wonderful Sci Am article about it a couple of years back),” she says. “By the way – that was the one big elephant in the room I forgot to mention in my earlier list – style. Obviously crucial to the market, and for that reason I would take the Oakley announcement very seriously, although I suspect they would do much better to team up.”
“So yes, I think Google have created a lot of interest – and I just hope they can maintain it long enough to release product,” says Lindsell. “Does Apple have something in the works? My guess would be yes – but it would be ultra hush hush, and I doubt if they will declare it until they are ready, in spite of Google’s announcement. Will they be working harder now in the background, – very probably yes.”
It may be Google that has generated this wave of excitement related to the possibilities of augmented reality, but there are plenty of others working in the space, and it’s entirely possible that we’ll see even more interesting products coming from elsewhere.
“I see many AR technologies emerging,” Lindsell tells us. “From location-based to marker-based services, image recognition and interpretation, object tracking (now in 3D – see metaio), facial recognition (not just face tracking), zoning, fencing, pre and post-visualisation/transformation, on-the-spot translation, sophisticated auditory cues and environments, use of haptics (early days here – much potential), sensory substitution, crowd sourcing in near real time, and even the use of VR in registration with sensor media to provide context. And there are so many ideas that people have yet to have – so much potential in AR yet to be realised. But there are key enabler technologies required first.”
“One of these is the AR specs,” she continues. “I think we are barely scratching the surface of how we might use AR. I really think that AR is the business end of a generational process of taking IT out of the office and conforming it to the user as ‘wearable tech’ that is constantly available to the user.”
“Think of everything that IT enables us to do now,” Lindsell concludes. “Computing was originally seen as wartime code-breaker technology. The cold war space race then helped it come of age (think chicken and egg again) because we needed help with the complexities of pre-launch checks for the hugely complex moon-rockets. Ever since there has been a march towards ? (no-one knows quite what!). All we know is that we use IT as an extension of ourselves – almost like add-on modules to help our brains (and occasionally other parts of us). So the real question is one of human and cultural evolution, what would we like some help with, and how can we increase our reach to get it?”
A couple of weeks ago, Google captured the imaginations of many with a slick promo video for Project Glass, a futuristic pair of Google glasses that put the capabilities of a smartphone directly into your field of vision. Though Google has been very clear about the video being more concept than reality, in terms of what the glasses can actually do at this stage, the glasses are real. Even Google co-founder Sergey Brin has been wearing them out in public.
The glasses have been both mocked and praised a great deal since the video was released. There have been numerous parody videos made, but also some more concerning skepticism from augmented reality experts.
We wanted to get some more takes from experts in the field about just how realistic Project Glass, as we’ve seen it presented, really is. We intend to talk to others, but we started with Ogmento President and co-founder Brian Selzer. We talked to him last year about how augmented reality + location = “the holy grail for marketers”. Ogmento itself is an augmented reality gaming company trying to change the way consumers interact with their smartphones. When we last talked to Selzer, Ogmento had released an iPhone game centered around Paranormal Activity.
First off, we might as well include Google’s original video, on the off chance you haven’t seen it by now. If you’ve seen it, continue on.
“The Project Glass video highlights the use of a HUD eyewear system to showcase data that can be acquired utilizing today’s smartphone technologies (GPS, speech recognition, etc),” he says. “From that standpoint, the technology and information displayed on the screen is certainly possible in a short period of time. The quality and performance of the HUD user experience itself is another matter though, and certainly worthy of a bit of skepticism. It’s coming though.”
“I was not very impressed with the UI/UX design in the Project Glass video,” he adds. “There is a fine line between useful and dangerous, or appealing and annoying. Sometimes less is more.”
The following videos show some potential dangers and annoyances:
“The navigational example worked pretty well, but some of the other examples fell a bit short in answering the question of ‘why’, and will leave a lot of people scratching their heads,” says Selzer.
“Once we start to bring true computer vision into the mix, and the display screen serves up data related more to the people, places and things around us (not just gps), it will become much more interesting, relevant, and perhaps a bit more clear why HUD technology can be so exciting at the mass-market level,” he says.
We recently looked at a presentation given earlier this year by one of the Google Glass engineers. He talked about the possibilities of contact lenses, which could basically act in similar ways to the glasses:
In his presentation, he shared a slide highlighting some key areas that could be impacted: gaming, virtual reality, augmented reality, interfacing with mobile, super vision, night vision, multi-focal electronic contact lenses, and “…”, which I presume represents an infinite number of possibilities.
Speaking of Google contact lenses, we asked Selzer if this would make things more plausible.
“Companies are definitely looking at contact lenses as a solution to help solve issues such as simultaneous focus,” he tells us. “I’m doubtful this is the best solution for mass-market consumption though. I can see this approach being adopted by the military, and perhaps a small group of hardcore gamers, super gadget geeks, etc…”
Personally, I can’t stand having things in my eye, so I tend to agree with the skepticism about mainstream appeal, although, admittedly, the cool factor (if truly cool) could get some of us to reconsider.
I think it’s clear that Google’s Project Glass promo has ignited some major interest in augmented reality technology. We asked Selzer if he expects a lot more developers to get involved with the technology because of the glasses.
“Google was early to step into the AR ring with their Google Goggles computer vision technology,” says Selzer. Google Goggles, if you’re unfamiliar, is a technology that lets users take pictures of things with their phones and get search results based on the image.
“Now with Project Glass it seems they are confirming their commitment to the AR space,” Selzer adds. “They are in a great position to pioneer here, so the fact that Google is now showcasing HUD technology is exciting.”
Some are speculating that Google could show off the glasses at Google I/O, the company’s annual developers conference, which takes place in June. If that turns out to be the case, it should at least get a lot of developers thinking about the possibilities, even if APIs aren’t released to help fuel the creativity.
“Today’s AR is typically a short-burst experience due to having to hold your mobile device up in front of your eyes,” he says. “Optimal or prolonged AR simply begs to be experienced with glasses. Once we have a wearable hands-free solution that works well, the AR industry will see even more growth. For developers looking to stay ahead of the pack, AR is truly an exciting space right now. It’s still very early, and we’re just getting started.”
Be that as it may, we’re already seeing some pretty interesting implementations of AR:
“When it comes to augmented reality advancements, both hardware and software continue to evolve at a decent pace,” Selzer says. “Mobile devices, cameras, sensors, display screens.. all continue to advance towards an AR-enabled world.”
“Many companies are investing in the space,” he adds. “Microsoft’s Kinect utilizes a 3D depth-sensing camera that allows for a very rich understanding of the environment. One can imagine some exciting scenarios when this camera technology is brought to mobile devices… we will be able to ‘see’ the world in a whole new way.”
As a matter of fact, we recently looked at a concept video from Microsoft in which they show some pretty interesting ideas, using Kinect.
“Sony’s SmartAR technology shows great promise for large-space AR experiences,” he says. “Qualcomm is leading the way for mobile developers to get their hands on some great computer vision software and start to experiment. Apple has some interesting patents in the space, and it’s only a matter of time before they wow us. Overall, there continues to be exciting advancements in AR as more and more large companies and professionals focus in this space.”
“We will run to stay fit by collecting Pac-Man pellets along the actual road, or by racing to avoid a pack of zombies,” he says.
Sounds a lot better than Wii Fit:
“We will look at the landscape around us and understand its history and significance instantly,” says Selzer.
Google would have a major edge in that department with things like Google Earth, Google Maps, Street View, SketchUp, etc. APIs would be the key, though. With developers turned loose on this stuff, I wonder how many people would spend more time in alternate realities than in the reality we currently reside in.
“We will never forget the name of that person in our social network again when we run into them at a party,” he says. “The Google glasses video just scratches the surface of such potential. We’re just getting started here.”
“Today, we love our smartphone devices — so much so, we bump into each other because we are glued to the screens and forget to look where we’re going,” Selzer adds. “Tomorrow we will love our wearable devices — seamlessly integrated, allowing us to look up and still remain connected.”
Hopefully it goes better than some of the Project Glass parodies we looked at above.
“The coolest gadgets will be the ones that are invisible, or a part of our everyday attire,” Selzer concludes. “Just a natural part of us as we go about our lives.”
Google certainly isn’t the only one working on wearable technology, by any means. Look at what Oakley’s doing. There are rumors that Apple and Valve may be working together on something. Expect to see more of this kind of stuff emerging in the near future. Next year’s Consumer Electronics Show should be an interesting one.
What do you think about the Google Glasses? Augmented reality in general? Let us know in the comments.