Google’s Niantic Project (which we first reported on here) is now starting to generate a little more attention. On Wednesday, we published “Google, This Is Getting Pretty Weird [The Niantic Project]” looking at some of the latest pieces of the mystery. The article is generating some discussion in the comments section, as well as on reddit.
For more background read the stories linked above.
Some believe it to be a viral campaign (possibly for Google’s Field Trip app or something related), and others say it is an ARG (alternate reality game). While it could be both, it is definitely a game.
If it is simply a viral campaign for Field Trip, the timing seems odd, since the app launched in September. The Niantic Project was clearly conceived before that, as the video upload dates indicate (as was the Twitter account, which was started in July), though the videos were not released that early. It also seems odd that a viral campaign for Field Trip wouldn’t carry more of a related theme, as opposed to the paranoia concept that accompanies the Niantic Project. Also, why would Google use the “Niantic” brand and not the “Field Trip” brand?
Reddit user rahmad says it’s a game, and points to a related wiki:
It’s an ARG, not sure viral campaign is the right connection, since at this point there is no ‘thing’, just the ARG. it could basically ‘be’ the product, in that it’s an alternate reality game. There are forum posts at unforum (arg community) and facepunch (general community) trying to solve some of the puzzles integrated in it. Players have a wiki up at googlearg.schlarp.com where most of the solves are being documented, and there’s also a link there to an IRC channel if you want to join us. There’s about 12-15 odd folks hanging in IRC all day tracking new content and piecing together the story. The storyline is 6 (maybe 7) days in at this point, so while we know a lot, my guess is there’s a shit-ton still unrevealed…
Here’s what you get at that link under “What is the Niantic Project?”:
At Comic-Con 2012, several videos were released involving a long haired comic book artist named Tycho. Although there was no record of the individual anywhere else, everyone seemed to know him. People came to Comic-Con to find Tycho. Big names in the business praised his amazing skill and describe him as someone who, “Whenever a franchise is lagging, they bring him in behind the scenes… And the franchise starts selling again.” Past employers seem to include Marvel, Batman Beyond, and more. These videos went widely unnoticed.
Flash forward to November of this year, new YouTube videos go out. These are a trailhead to a larger investigation. The Niantic Project, Shapers, the NIA, XM Fields, what does it all mean? This is the chronicle of our investigation. We’d like for you to help.
Out of Universe – The organization behind the leaks is NianticLabs, a division of Google whose only other reported project to date is their Field Trip app. This arg seems completely unrelated to Field Trip, and its size and complexity leads us to speculate that it is for a completely new product, which may or may not share the emphasis on geolocation.
The wiki even has a whole cast of characters page, including this Tycho and Ben Jackland (the guy with the “malfunctioning phone” from the videos). It also has various resources for players, including images, videos, audio, chatlogs, documents, apps, words of the day, thoughts of the day, and websites of note.
Google’s Niantic Labs recently launched the Field Trip App for Android. The company described it as “your guide to the cool, hidden, and unique things in the world around you.”
Here’s a refresher:
Basically, it shows you stuff that it thinks you’ll be interested in when you’re near something that fits the bill. The Atlantic ran a pretty interesting piece about it last week, talking with John Hanke, the former Google Maps chief, who now runs Niantic Labs inside of Google.
Now, the “Niantic Project” has appeared as some kind of mysterious attention-getter. Googlers like John Mueller are sharing this Google+ post:
Here’s what the voice in the video says if you don’t feel like watching it:
“There’s more to the world than we can truly see. You sensed it, but you cannot tell. Something is very wrong. Strange occurrences, visions affecting us. Are we being watched? I’m a truthseeker with many questions. The most important is: What is the Niantic Project?”
You’ll also find this video of a guy getting a “glitchy phone” from an online auction, which makes his phone show something when he points it at statues.
The site has a “word of the day”:
And an XM Study:
In the comments of the Google+ post, one person asks if the project is Glass related. Hanke did mention Glass in the Atlantic interview, but the phone video suggests it’s more of a software-related thing (which could mean phones and Glass).
“You’ve got things like Google Glass coming,” Hanke said in the article. “And one of the things with Field Trip was, if you had [Google Glass], what would it be good for? Part of the inspiration behind Field Trip was that we’d like to have that Terminator or Iron Man-style annotation in front of you, but what would you annotate?”
Another person in the Google+ comments suggests that the Niantic Project could be just a viral marketing campaign for Field Trip, which is certainly a possibility. Either way, it’s almost definitely augmented reality-related.
Here’s the Niantic Project Twitter timeline in real time, featuring tweets dating back to July:
Microsoft and Nokia are holding a big event today to show off the future of the Windows Phone platform. We’re expecting to see some new hardware from Nokia’s impressive Lumia line. The mobile device manufacturer has also been busy on the software side of things lately. They recently announced Nokia Music, a free streaming service, but a new partnership with AOL has yielded an even more impressive app.
Say hello to Entrance, AOL’s new, and rather impressive, app for Nokia Lumia handsets. The app was built as part of a partnership between AOL and Nokia that will see AOL’s stable of entertainment-focused properties, like Moviefone and AOL Music, combined into a single app.
At first glance, Entrance seems like any other entertainment hub app. That would be doing the app a disservice, however, as its augmented reality feature is a game changer. Holding up the Nokia Lumia displays all the theaters in your vicinity alongside a list of the films being shown at each location. It also lists the next showtime for each movie alongside the average price of a ticket. It’s one of the most impressive uses of augmented reality that I’ve seen.
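AOL hasn’t published how Entrance places its theater markers, but geolocation-based AR overlays like this generally work the same way: take the user’s GPS position and compass heading, compute the bearing to each point of interest, and map the angular offset into screen coordinates. Here’s a minimal Python sketch of that idea; the coordinates, field of view, and screen width below are illustrative assumptions, not anything from the app.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, screen_w=800):
    """Map a POI's bearing into a horizontal pixel position, or None if off-screen."""
    offset = (poi_bearing - heading + 180) % 360 - 180  # signed angle from view center
    if abs(offset) > fov_deg / 2:
        return None  # outside the camera's field of view
    return int(screen_w / 2 + (offset / (fov_deg / 2)) * (screen_w / 2))

# Example: user near Times Square facing north-east (heading 45 degrees),
# theater a few blocks away (made-up coordinates)
x = screen_x(bearing_deg(40.7580, -73.9855, 40.7614, -73.9776), heading=45.0)
print(x)  # marker lands right of screen center
```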
“Entrance by AOL leverages the depth of AOL’s content, whilst demonstrating the unique and differentiated experiences partners can bring to Nokia Lumia smartphones. With innovative features such as Augmented Reality, personalized and contextual Live Tiles and beautiful app design, enabled through the rich Windows Phone UI, we believe Nokia consumers will love this exclusive one stop shop for entertainment,” said Mark Fletcher, Director, Global Partnering & Application Development, Nokia.
Entrance is only going to be available on the Nokia Lumia handsets, which are exclusively powered by Windows Phone. The partnership seems indicative of a larger partnership between AOL and Microsoft, but Sol Lipman, VP of Mobile First for AOL, told WebProNews that that is not the case. He says that AOL is “exploring how to best leverage the Windows 8 platform and our content.” He also said that Microsoft’s “distribution channels are of interest to AOL.”
Besides the impressive augmented reality, Entrance is playing around with the idea of interlinked content. The app brings together movies, music and TV in a way that connects it all based on actors, news and even soundtracks. AOL uses the example of looking up a movie and then being able to listen to the soundtrack right there. Of course, users can also buy and download the soundtrack if they so wish.
Combining augmented reality and interlinked content into a single app is already an impressive feat, but AOL has higher aspirations. Lipman told us that they intend to create “the ESPN for entertainment.” It’s a lofty goal, as most consumers use multiple apps and sources for all their entertainment needs. Apple has come really close to being that with iTunes, but there’s nothing that covers the wide breadth of entertainment like ESPN does with sports. AOL might just be the first to do that.
AOL’s Entrance seems like it’s from the future, but it’s actually built with Windows Phone 7 in mind. Anybody who owns a Windows Phone that’s outfitted with Windows Phone 7.5 can download Entrance right now. AOL has built it to be forward compatible with Windows Phone 8, but I can imagine them adding some exclusive features to take advantage of all the new goodies in Windows Phone 8.
I never thought I would see the day when Windows Phone gets a killer app, but AOL seems to have created that app. I expect Google and Apple to follow suit with their own similar apps in the near future. We’ll find out later today if Microsoft and Nokia can keep the momentum going in Windows Phone’s favor.
Hatsune Miku is an incredibly popular “singing synthesizer application with a female persona” – a virtual pop star, for short. The Japanese creation is voiced by a popular voice actress and actually performs concerts, live, as a projection.
So, I guess it’s understandable that someone could take a particular liking to the character.
But this, my friends, is forever alone level 99. It’s not 100, because there is still something pretty neat about it.
Basically, a guy has turned Miss Miku into his girlfriend – or at least made her go on a date with him. The video shows the two taking a nice stroll through the park, and finally ending with some… inappropriate touching that leads you to believe that there will be more to this date once the cameras are off.
Check it out below:
Based on the video and a little bit of translation, we can see that the “date” was made possible by an ASUS Xtion (Kinect-like device) and some modified video goggles. A bit creepy? Sure. Fascinating use of technology? Absolutely.
This morning we brought you the story of Andrew Ng and the 16,000-core neural network that was able to decipher on its own what a cat is from thousands of pictures taken from YouTube videos. This afternoon, Google has chimed in and provided some background on why it is funding machine learning research.
Over on the Google Official Blog, Google Fellow Jeff Dean and Andrew Ng, director of Stanford’s artificial intelligence lab, have outlined how they believe their research will benefit both Google and the world. From the blog post:
You probably use machine learning technology dozens of times a day without knowing it—it’s a way of training computers on real-world data, and it enables high-quality speech recognition, practical computer vision, email spam blocking and even self-driving cars. But it’s far from perfect—you’ve probably chuckled at poorly transcribed text, a bad translation or a misidentified image. We believe machine learning could be far more accurate, and that smarter computers could make everyday tasks much easier. So our research team has been working on some new approaches to large-scale machine learning.
It’s easy to see what applications better machine learning could have for Google. Better voice recognition, image search, or even regular search would certainly help the company’s core business. A more “intelligent” search could also help move Google past the treadmill of its algorithm updates. Perhaps even an “intelligent” info overlay using Google Glass is in the future.
What Ng and the engineers at Google X are doing is creating neural networks that can teach themselves to recognize objects using unlabeled data. Currently, machines are taught using content labeled “cat,” for example. The Google X experiment built a large neural network that simulates small-scale human brain architecture. The Google X team wanted to know what it would recognize after being shown YouTube videos for a week.
Our hypothesis was that it would learn to recognize common objects in those videos. Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats. Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it “discovered” what a cat looked like by itself from only unlabeled YouTube stills. That’s what we mean by self-taught learning.
The picture above represents what the computers consider a cat to be. Dean and Ng stated that the real goal of the project is to develop machine learning systems that are scalable so that “vast sets of unlabeled training data” (such as the internet, perhaps?) can be accurately classified. The results of the cat experiment will be presented this week at the International Conference on Machine Learning in Edinburgh, Scotland.
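To make “self-taught learning” concrete, here’s a toy sketch of the underlying idea in Python: an autoencoder trained only to reconstruct its unlabeled input, so its hidden units become feature detectors without ever seeing a label. This is nothing like the scale or architecture of the 16,000-processor network described above; it’s just a minimal, self-contained illustration of learning from unlabeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for unlabeled video frames: 200 flattened 8x8 "images"
X = rng.random((200, 64))

n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    H = sigmoid(X @ W1)          # hidden "feature detector" activations
    X_hat = H @ W2               # reconstruction of the input
    err = X_hat - X              # no labels anywhere: the input is its own target
    # Backpropagate the reconstruction error
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * H * (1 - H)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

print("reconstruction MSE:", (err ** 2).mean())
# After training, each column of W1 is a learned feature; in Google's far larger
# network, one such unit ended up responding strongly to cat faces.
```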
[UPDATE] Google has chimed in on the results of the experiment and why it thinks machine learning is important for its future.
[ORIGINAL]
Google X is the secretive inner lab where Google engineers try to make their wildest science fiction dreams come true. Two of the most famous recent projects to be announced publicly from Google X are the self-driving cars Google is now testing throughout the country and the recently announced Google Glass augmented reality display headset. Those technologies have been announced, but there are no doubt other, even more high-tech projects being kept secret.
The lid was lifted on another Google X project this week when Andrew Ng, the director of Stanford’s artificial intelligence lab, told the New York Times about his recent experiments in machine learning with Google. Ng and Google engineers have created one of the largest artificial neural networks in the world, with 16,000 connected processors. Ng recently gave the network an interesting task: find a cat on YouTube.
Ng is an expert on machine learning, a branch of artificial intelligence research concerned with developing learning algorithms. Using state-of-the-art machine learning techniques, the Google X neural network was able to teach itself what a cat looks like using 10 million images from YouTube videos. Ng stated that the result of the simulation surprised researchers, as the network was never told what a cat was.
Ng and other researchers will be presenting the results of the simulation later this week at the International Conference on Machine Learning. In addition to his machine learning research and teaching at Stanford, Ng is one of the co-founders of Coursera, the free online university that offers classes from professors at universities such as Stanford, the University of Michigan, and Princeton. If you are interested in just how the artificial neural network was able to identify a feline, take a look at the video below. In it, Ng explains to the 2011 Bay Area Vision Meeting the concepts behind the technology that was used to accomplish the feat:
Can’t wait for the Google Glass headsets to arrive? A company called Vergence Labs has some eyewear that just might tide you over until Google Glass is finally out of beta.
The company has started a Kickstarter campaign to raise $50,000 for what it claims are the world’s first electric-powered sunglasses. The basic pair the startup touts turns from shades to normal glasses with the flick of a switch. In addition, the specs have a 720p video camera on the bridge of the frames, allowing them to record a user’s-eye view to a microSD card. Backers who pledge $199 or more will get their very own electric sunglasses. The glasses currently only come in large, blocky frames. Luckily for Vergence, that style is currently popular, though the company is still working to miniaturize the components to fit them into smaller frames.
Vergence hopes users of the electric glasses will upload videos they take to their future website, YouGen.Tv. The company suggests that users would be able to share their view of life and help educate the world on what they see. In reality, though, if these glasses take off as much as Vergence’s founders hope, they are going to be dealing with a huge Chatroulette-type problem.
Aside from the basic model of electric sunglasses, Vergence is raising money for its “immersive reality” visor glasses, seen above. The visor, which looks like something a fighter pilot would wear, displays an overlay that can be programmed to interact with elements from the visor’s view. This is the same type of augmented reality that Google Glass promised with its announcement trailer, and it can be had for pledging $7,000 to the Kickstarter campaign. The device is only shown running some face-recognition software, but one Vergence co-founder claims the company hopes to implement gestural interfaces in the near future.
That future had better be very soon, as the Kickstarter campaign estimates that both the electric sunglasses and the augmented-reality visor are due to ship before Christmas of this year. That gives Vergence just over seven months to perfect something that appears to still be an early prototype. Watch the founders of Vergence Labs pitch their products in the video below, and decide for yourself if their ideas could be implemented by December.
We’ve all seen the parodies. More than we can count, at this point. Still, all the attention Google’s Glass project has gotten can be interpreted as giddy excitement for the cyborg-vision we’ve all wanted since childhood. It’s sad, then, to learn that the current Glass prototypes that have been shown off by Sergey Brin and Sebastian Thrun are nowhere near what the announcement video for the project made them seem to be.
Rafe Needleman, a writer for CNET, points out that Google Glass will only display information on the edge of a person’s view. Specifically, information is provided at the top of a person’s line of sight. Needleman cites a Google spokesperson (not Vic Gundotra, Google’s senior vice president of social) as confirming the news. He reports being told that it is too early to even know what functions the devices will have or what type of UI they will use.
It’s comforting to know that text messages won’t be popping up in front of our faces as we walk down the street. All the same, it’s disappointing that the future-world where an augmented, virtual worldview is provided and layered over every real-life object is still years away. The prototypes, it turns out, are actually prototypes. Sometimes Google can be such a tease.
A big theme in the development scene these days is how we can make creation via code easier and more fluid for the average user. Not everybody has the time, patience or education to learn a programming language, let alone the two or three that are required to create an app or software program. There have been some strides lately with The App Builder and Game Maker, but a lot of development is still firmly out of reach.
Augmented reality company metaio doesn’t like that, so it set out to do something about it. The company’s tools contained in the metaio Mobile SDK have been a huge hit for developers wanting to create AR experiences on mobile devices. The SDK has seen over 10,000 downloads and has 1,000 active developers. There are almost 200 applications out there now built on the Mobile SDK, with 900 currently in development.
“The response from the developer community was phenomenal,” said metaio’s CTO Peter Meier. “It however inspired us to offer software that would put this technology in the hands of designers, content providers, publishers, and the average users who might not have extensive programming knowledge.”
That’s why the company is now introducing the metaio Creator. It’s aimed squarely at “non-developers, designers and people who have the desire to create amazing AR experiences but wish to do so without using a hard-coding environment.” The Creator is compatible with junaio, a mobile AR browser, and can be used to quickly create cloud-based AR content for Android and iOS devices.
In even more exciting news, the next update for junaio will introduce HTML and JavaScript support, so developers will be able to expand their AR applications to do even more. Looking even further ahead, the metaio Creator will allow users to upload their creations to all of metaio’s software, including the Mobile SDK.
“The most important thing for metaio is to cultivate development and content creation for the entire community,” said Head of US Marketing Trak Lord. “Programmers, users, designers, artists – everyone should be able to live in the Augmented City.”
From May 8 through 21, the metaio Creator will be 30 percent off. The price is still a pretty hefty $227, but it may be worth it if it delivers on its promise of advanced AR creation without the need for extensive programming knowledge. There is also a demo available for you to try out.
Google got a lot of people thinking when it unveiled Project Glass and its promo video showing how Google glasses of the future could work. It was cool. Plain and simple.
Of course, as is so often the case, things aren’t really that simple. It was a concept video, and few truly know what Google’s glasses in their current state can really do, or even how close Google really is to the currently fictional reality portrayed in the video. Experts in the field of augmented reality have expressed a fair amount of doubt, though there is still plenty of excitement coming from them as well.
AR technology concept broker/analyst Marianne Lindsell tells WebProNews, “Like the ‘Stark HUD’ concept (produced, as far as I can see, as a sort of teaser for the Iron Man II film) I do suspect that the Google Project Glass video has a strong ‘Hollywood’ element.”
“I would guess that Google are both testing the market and managing expectation,” she says. “However also like Stark HUD, there has clearly been some use of technology in the production – and in the case of Project Glass, the tech/Hollywood ratio is I suspect much higher, if less than 100%.”
“The least realistic parts of the Google Glass video clip in my opinion are the field of view (a large FOV is needed – but can such a small device provide it?), the brightness (possibly – but there are some good techs out there), instant responsiveness, and to some extent the (presumption of?) excellent registration (which many AR concepts depend on, but Google have cleverly side-stepped in the clip by largely avoiding such concepts).”
“Focusing at an appropriate distance is possible (I have seen it), – but not in such a tiny piece of hardware (yet!),” she adds. “Even good registration is possible in some situations, – but any specs will be at the mercy of smartphone-like inertial and magnetic sensors (compasses are notorious), unless it can take its cues from the surroundings by image recognition and analysis (which some techs already do surprisingly well).”
Some Things To Consider
Lindsell highlighted some very interesting points about the Google Glasses in a comment she left on another WebProNews article. I’ll repost that comment here:
The glasses -concept- is definitely possible (as is the head tracking). I have seen a number of products that convince me of that – but the sleek designer package probably isn’t (yet).
There are usability thresholds in many areas that such a product will need to meet to be truly useful:
1) Field of View – the Google Glass product seems way too small to provide a useable FOV (no-one is yet aiming high enough here)
2) Brightness – a huge dynamic range is needed, – think about readability on a sunny day – and brightness takes power
3) Exit pupil – an optical engineering parameter that needs to rate highly or the slightest jiggle of the glasses on your face will rob you of the display
4) Focus – optics will be needed to focus the display at a useable distance
5) Transparency – too opaque and the readouts block out your view (mind that lamp post!) – too transparent and you can’t make out what the marker is saying
6) Zonation and format – you probably -never- want any readouts to appear within your central view area – designing them to appear in the optimum place on the periphery is vital. No large windows please! – prefer conformal indications and markers.
7) Probably more important than all of the above will be the off/standby switch – the default position should be standby – with a quick and easy way to switch ‘on’ while required
8) Responsiveness and Registration – such a device will be -very- sensitive to delays. A note for OS suppliers!
9) Driving – special case – needs an even more safety-oriented (and accredited) design – but by no means impossible – think HUDs in fast jets
When someone, let’s assume Google for now, first clears all of the above hurdles, then we may have a useable product, although you may not be as keen on it when you see how big the packaging is.
I’m not quick to believe that Google’s sleek, small package is possible. Even then, I am assuming that the device will need to be connected to your smartphone.
Of course it’s always possible that the Google device uses a laser to project the display using one of the eyepieces. That -might- allow a smaller packaging.
The concept of course remains valid, and the gauntlet is well and truly thrown down to all major players, to overcome the challenges.
As for all the different things such a product would be useful for, – I submit that we have only scratched the surface of AR as a whole.
Who would have imagined the WWW when first connecting two computers together (with due credit to Mr Berners-Lee).
AR is a whole new way of teaming technology with people. For that, the technology needs to be -really- people-friendly!
“Many of these parameters will have a threshold level the tech must achieve in order to be useable and acceptable to the consumer market,” Lindsell told us in an email. “I am not about to nail my colours to the mast on exactly where to call these levels, but suffice to say that whilst many products out there have some way to go, some of them are, as far as I can see, showing signs that they may get there. This is why I think there may be some real tech behind the Google Glass Project. What we don’t know of course is how far along Google are yet. I think the clue is that it is far enough for them to test the market and attempt to manage expectation.”
We also asked Lindsell whether contact lenses might be a way to deliver this kind of technology. “Probably not,” she says. “Of course there are a few universities (and even Microsoft) actively researching electronic display contact lenses, but it is still early days yet. There are significant hurdles in terms of how to power them, and even greater ones in terms of how to focus the image at a suitable distance.”
“Producing a picture matrix with sufficient resolution, over a sufficiently wide FOV is also a major challenge, and although I can’t speak for ‘hidden’ projects, I am not aware that we are even within sight of the right ball park yet (apologies for mixed metaphors),” she continues. “But then there again – electronic focus is possible (I have seen it) – though not in a miniature package. Contact lenses –may- seem like they would help with the FOV and form factor problems, but in reality I think they would have to solve those problems, in miniature, first. I think the jury is out on when contact lenses may be able to deliver AR (though I’m thinking 10 years+), although I might predict that in the interim electronic (non-AR) contact lenses may find use as a health sensor.”
We may not know how much of what has been presented in Project Glass is really feasible at this point, but Google’s promo video has clearly generated a lot of enthusiasm. We asked Lindsell if she expects a lot of excitement and involvement from developers as a result.
“I think this is where Google have really scored,” she says. “People sit up and listen when Google speak. It is my firm hope that they will be able to market an attractive product before this interest dies down. And here’s the rub – truly useable AR specs will require –a lot- of engineering, and this needs funding, which means market interest. There’s a chicken and egg situation here – the market is only interested in what is realistically possible (hence your own interest I suspect), – but even organisations with the ability to fund development need to prove there is a strong demand to release those funds, as well as a sense that the end product is truly feasible.”
“There may be some hope,” she adds. “I have seen demonstrations of many existing AR specs technologies first hand (including Vuzix, Laster, Trivisio, BAe Systems and a few others) and although I have yet to see a single system meet what I might call a people-friendly acceptability factor, I have seen the current state of development of some of the component technologies.”
“This is why I think that AR specs will be possible,” she says. “What I am far less sure about is the final form factor – but even here let’s not rush to judgement, as prototype devices are certain to be clunky and unpalatable, whereas there has been significant R&D and the final package may be acceptable (even if not quite as tiny as Project Glass). How far Google have really got with this is anyone’s guess, but if they don’t have something up their sleeve, it would have been very brave of them to put about the Project Glass video clip, with such a tiny device – especially for Sergey Brin to be seen wearing them so openly.”
“If there is a secret here my guess would be laser projection (not onto the retina – which would require eye tracking, but creating a virtual image using the eyepiece lens) or possibly a cunning use of novel LED tech (there continues to be much R&D here – think blue LEDs and Shuji Nakamura – there was a wonderful Sci Am article about it a couple of years back),” she says. “By the way – that was the one big elephant in the room I forgot to mention in my earlier list – style. Obviously crucial to the market, and for that reason I would take the Oakley announcement very seriously, although I suspect they would do much better to team up.”
“So yes, I think Google have created a lot of interest – and I just hope they can maintain it long enough to release product,” says Lindsell. “Does Apple have something in the works? My guess would be yes – but it would be ultra hush hush, and I doubt if they will declare it until they are ready, in spite of Google’s announcement. Will they be working harder now in the background, – very probably yes.”
It may be Google that has generated this wave of excitement related to the possibilities of augmented reality, but there are plenty of others working in the space, and it’s entirely possible that we’ll see even more interesting products coming from elsewhere.
“I see many AR technologies emerging,” Lindsell tells us. “From location-based to marker-based services, image recognition and interpretation, object tracking (now in 3D – see metaio), facial recognition (not just face tracking), zoning, fencing, pre and post-visualisation/transformation, on-the-spot translation, sophisticated auditory cues and environments, use of haptics (early days here – much potential), sensory substitution, crowd sourcing in near real time, and even the use of VR in registration with sensor media to provide context. And there are so many ideas that people have yet to have – so much potential in AR yet to be realised. But there are key enabler technologies required first.”
“One of these is the AR specs,” she continues. “I think we are barely scratching the surface of how we might use AR. I really think that AR is the business end of a generational process of taking IT out of the office and conforming it to the user as ‘wearable tech’ that is constantly available to the user.”
“Think of everything that IT enables us to do now,” Lindsell concludes. “Computing was originally seen as wartime code-breaker technology. The cold war space race then helped it come of age (think chicken and egg again) because we needed help with the complexities of pre-launch checks for the hugely complex moon-rockets. Ever since there has been a march towards ? (no-one knows quite what!). All we know is that we use IT as an extension of ourselves – almost like add-on modules to help our brains (and occasionally other parts of us). So the real question is one of human and cultural evolution, what would we like some help with, and how can we increase our reach to get it?”
A couple weeks ago, Google captured the imaginations of many with a slick promo video for Project Glass, a futuristic pair of Google glasses that put the capabilities of a smartphone directly into your field of vision. Though Google has been very clear about the video being more concept than reality, in terms of what the glasses can actually do at this stage, the glasses are real. Even Google co-founder Sergey Brin has been wearing them out.
The glasses have been both mocked and praised a great deal since the video was released. There have been numerous parody videos made, but also some more concerning skepticism from augmented reality experts.
We wanted to get some more takes from experts in the field about just how realistic Project Glass, as we’ve seen it presented, really is. We intend to talk to others, but we started with Ogmento President and co-founder Brian Selzer. We talked to him last year about how augmented reality + location = “the holy grail for marketers”. Ogmento itself is an augmented reality gaming company trying to change the way consumers interact with their smartphones. When we last talked to Selzer, Ogmento had released an iPhone game centered around Paranormal Activity.
First off, we might as well include Google’s original video, in the off chance you haven’t seen it by now. If you’ve seen it, continue on.
“The Project Glass video highlights the use of a HUD eyewear system to showcase data that can be acquired utilizing today’s smartphone technologies (GPS, speech recognition, etc),” he says. “From that standpoint, the technology and information displayed on the screen is certainly possible in a short period of time. The quality and performance of the HUD user experience itself is another matter though, and certainly worthy of a bit of skepticism. It’s coming though.”
“I was not very impressed with the UI/UX design in the Project Glass video,” he adds. “There is a fine line between useful and dangerous, or appealing and annoying. Sometimes less is more.”
The following videos show some potential dangers and annoyances:
“The navigational example worked pretty well, but some of the other examples fell a bit short in answering the question of ‘why’, and will leave a lot of people scratching their heads,” says Selzer.
“Once we start to bring true computer vision into the mix, and the display screen serves up data related more to the people, places and things around us (not just gps), it will become much more interesting, relevant, and perhaps a bit more clear why HUD technology can be so exciting at the mass-market level,” he says.
We recently looked at a presentation given earlier this year by one of the Google Glass engineers. He talked about the possibilities of contact lenses, which could basically act in similar ways to the glasses:
In his presentation, he shared a slide highlighting some key areas that could be impacted: gaming, virtual reality, augmented reality, interfacing with mobile, super vision, night vision, multi-focal electronic contact lenses, and “…” which would represent an infinite number of possibilities I presume.
Speaking of Google contact lenses, we asked Selzer if this would make things more plausible.
“Companies are definitely looking at contact lenses as a solution to help solve issues such as simultaneous focus,” he tells us. “I’m doubtful this is the best solution for mass-market consumption though. I can see this approach being adopted by the military, and perhaps a small group of hardcore gamers, super gadget geeks, etc…”
Personally, I can’t stand having things in my eye, so I tend to agree with the skepticism about mainstream appeal, although, admittedly, the cool factor (if truly cool) could get some of us to reconsider.
I think it’s clear that Google’s Project Glass promo has ignited some major interest in augmented reality technology. We asked Selzer if he expects a lot more developers to get involved with the technology because of the glasses.
“Google was early to step into the AR ring with their Google Goggles computer vision technology,” says Selzer. Google Goggles, if you’re unfamiliar, is a technology that lets users take pictures of things with their phones and get search results based on the image.
“Now with Project Glass it seems they are confirming their commitment to the AR space,” Selzer adds. “They are in a great position to pioneer here, so the fact that Google is now showcasing HUD technology is exciting.”
Some are speculating that Google could show off the glasses at Google I/O, the company’s annual developers conference, which takes place in June. If that turns out to be the case, it should at least get a lot of developers thinking about the possibilities, even if APIs aren’t released to help fuel the creativity.
“Today’s AR is typically a short-burst experience due to having to hold your mobile device up in front of your eyes,” he says. “Optimal or prolonged AR simply begs to be experienced with glasses. Once we have a wearable hands-free solution that works well, the AR industry will see even more growth. For developers looking to stay ahead of the pack, AR is truly an exciting space right now. It’s still very early, and we’re just getting started.”
Even as that may be the case, we’re already seeing some pretty interesting implementations of AR:
“When it comes to augmented reality advancements, both hardware and software continue to evolve at a decent pace,” Selzer says. “Mobile devices, cameras, sensors, display screens.. all continue to advance towards an AR-enabled world.”
“Many companies are investing in the space,” he adds. “Microsoft’s Kinect utilizes a 3D depth-sensing camera that allows for a very rich understanding of the environment. One can imagine some exciting scenarios when this camera technology is brought to mobile devices… we will be able to ‘see’ the world in a whole new way.”
As a matter of fact, we recently looked at a concept video from Microsoft in which they show some pretty interesting ideas, using Kinect.
“Sony’s SmartAR technology shows great promise for large-space AR experiences,” he says. “Qualcomm is leading the way for mobile developers to get their hands on some great computer vision software and start to experiment. Apple has some interesting patents in the space, and it’s only a matter of time before they wow us. Overall, there continues to be exciting advancements in AR as more and more large companies and professionals focus in this space.”
“We will run to stay fit by collecting Pac-Man pellets along the actual road, or by racing to avoid a pack of zombies,” he says.
Sounds a lot better than Wii Fit:
“We will look at the landscape around us and understand its history and significance instantly,” says Selzer.
Google would have a major edge in that department with things like Google Earth, Google Maps, Street View, SketchUp, etc. APIs would be the key, though. With developers turned loose on this stuff, I wonder how many people would spend more time in alternate realities than in the reality we currently reside in.
“We will never forget the name of that person in our social network again when we run into them at a party,” he says. “The Google glasses video just scratches the surface of such potential. We’re just getting started here.”
“Today, we love our smartphone devices — so much so, we bump into each other because we are glued to the screens and forget to look where we’re going,” Selzer adds. “Tomorrow we will love our wearable devices — seamlessly integrated, allowing us to look up and still remain connected.”
Hopefully it goes better than some of the Project Glass parodies we looked at above.
“The coolest gadgets will be the ones that are invisible, or a part of our everyday attire,” Selzer concludes. “Just a natural part of us as we go about our lives.”
Google certainly isn’t the only one working on wearable technology, by any means. Look at what Oakley’s doing. There are rumors that Apple and Valve may be working together on something. Expect to see more of this kind of stuff emerging in the near future. Next year’s Consumer Electronics Show should be an interesting one.
What do you think about the Google Glasses? Augmented reality in general? Let us know in the comments.
Remember Solve For X – that Google-hosted event earlier this year, where smart people gathered to discuss technology and solutions for real world problems?
One of the presentations was from Babak Parviz. That presentation was about building microsystems on the eye. While fascinating in its own right, it is even more fascinating now that we’ve seen Google’s promo for Project Glass – especially considering that Parviz is an engineer on the Project Glass team.
Here’s Parviz’s Solve For X presentation:
The premise: What if we packed contact lenses with tiny devices?
“So, our idea is that if you could make a contact lens that could have sensors and do an analysis on the surface of the eye, and report the results back wirelessly, we may be able to get a sense of what happens inside people’s bodies without actually going inside people’s bodies,” says Parviz.
He’s talking about the applications of such technology to health care (one area where others have expressed great intrigue with regards to Project Glass).
“One good thing about contact lenses is that we already know that more than 120 million people wear them, and many people have been wearing them for decades,” he says. “So that’s a good interface to the body. That’s an acceptable interface to the body.”
“This can, in principle, provide a radically new interface with the human body, really, probably for the first time, enabling continuous monitoring of the person’s health and collecting data,” he says.
But what if this went beyond just health care? How far could this go? Obviously Project Glass isn’t about health care, based on the promo (though again, there are certainly possibilities for health care and many other applications, should developers gain access to the relevant APIs).
Parviz actually goes beyond health care in his talk. “We’ve been toying with the idea of, what happens if you put a display on your contact lens, so what if I could make a contact lens that could show me information, and it would talk to my cell phone, and the cell phone would talk to the tower and the cloud, eventually, and enable at least some level of visual interaction for the person who’s wearing this.”
He shows a slide with the following bullet points:
Gaming
Virtual Reality
Augmented Reality
Interfacing with mobile
Super vision?
Night vision?
Multi-focal electronic contact lenses
…
Contact lenses or no, you can think about the possibilities for Project Glass based on that list. The “…” may somehow be even the most fascinating part. Just consider the apps that have been created for smartphones.
Wired interviewed a couple of augmented reality experts, who seem to think that Google can’t replicate the experience represented in the Project Glass promo video with the display shown in the video and photographs that have been made publicly available. Taking this into consideration, one may wonder if the contact lens approach would be the solution. Given that a Project Glass team member has experience in this area, and is responsible for the above presentation, it really doesn’t seem far-fetched that Google could unveil such a thing somewhere down the line.
With regards to augmented reality, specifically, Parviz says in the presentation, “Whether this is possible to implement on a contact lens in a short time or not, my answer is not a short time. But the prospects are there. So we can actually enrich what people normally see with extra layers of data as they go about their daily lives.”
“There’s another aspect of this, if you could someday put a display in a contact lens,” he notes. “And that is, fundamentally, we don’t need lots of displays. So if I think about my daily routine, I wake up in the morning, I look at my watch, I look at my smartphone. These are different screens. I may watch some TV. I drive my car. It has a dashboard. I go to work. I use my laptop. There are lots of different screens during the day that I interact with, including billboards. But what all those things do is put something on my retina. So I don’t really need all of those. I just need one display that’s personal to me–maybe it’s in the form of a contact lens–that shows me the relevant information.”
We know the glasses exist. Co-founder Sergey Brin was photographed wearing them at a dinner party. But the possibility of them existing the way they do in Google’s video may be a different story.
For the record, Google has not really presented the whole thing as much more than a concept at this point. The video itself has “One Day…” in the title.
The Verge spoke with Brin after the party, and according to the report, Brin said they’re very much in the early prototype stage, that “right now you really just see it reboot,” and to “give us time”.
It seems at the very least, they’re nowhere near what the video depicts at this time.
Wired has a very interesting report, however, talking to some augmented reality experts who cast some doubt on whether it’s even possible for Google to emulate what actually takes place in the video. Roberto Baldwin reports:
However, according to Pranav Mistry, an MIT Media Lab researcher and one of the inventors of the SixthSense wearable computing system, “The small screen seen in the photos cannot give the experience the video is showing.”
Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, concurs: “You could not do AR with a display like this. The small field of view, and placement off to the side, would result in an experience where the content is rarely on the display and hard to discover and interact with. But it’s a fine size and structure for a small head-up display.”
Mistry does point out that the Project Glass demo is a concept video. But MacIntyre believes Google may have set the bar too high for itself. “In one simple fake video,” MacIntyre told Wired, “Google has created a level of over-hype and over-expectation that their hardware cannot possibly live up to.”
It wouldn’t be the first time Google created hype that it didn’t live up to – that’s for sure (Buzz and Wave come immediately to mind) – though on the other hand, they are responsible for this.
Being a prototype, it’s entirely possible that the glasses will look nothing like they do in the promo. In fact, it doesn’t seem out of the realm of possibility that a contact lens version of Project Glass could make an appearance one day.
As Martin LaMonica at CNET points out, one of the engineers on the Project Glass team, Babak Parviz, has experience in Augmented Reality and contact lenses.
In fact, Parviz recently gave a fascinating presentation about contact lenses that you need to watch.
Think about that for a minute. Would you put a Google product, physically in your eye?
Google unveiled Project Glass on Wednesday. It’s still in the concept/testing stage, but it’s a pair of futuristic glasses that puts the virtual world directly into your eyeballs. If you haven’t seen Google’s video yet, watch it before you go any further:
Now that it’s had a little more time to sink in, people have all kinds of ideas and opinions about what this is and what it could be.
Here are some comments on Google’s Google+ post about Project Glass:
Ben Smith – I must get this as soon as possible. I’m going to try to not freak out for the rest of the day at how cool this is.
Donovan Westenburg – i’d love to have glasses like these. would miss my phone though. wonder if it will work with the phone using bluetooth
Garrett Manley – I think this if this became a reality, it would revolutionize the way the mobile industry is developing. A device like this “Project Glass” is something that I have been dreaming about for quite some time, and I know I’m not alone in this dream.
RC Concepcion – This is awesome. feature wise.. the promo highights so much of what I’d love to see out of the daily social life.. If portions of that make it into tech.. that be awesome.. Will totally be looking into what to add..
Nathan Harig – Man, I could see a lot of use here for those of us in the emergency services… having data like this in realtime handsfree would be a huge asset.
John Gruber wrote on the Daring Fireball blog, “Google’s transition into the new Microsoft is now complete: fancy-pants sci-fi concept video to promote stunningly awkward augmented reality glasses.”
1. I fail to see how wearing this technology on your face means it’s out of the way.
2. There’s some incredible Orwellian doublespeak at work here, e.g., technology that “helps you explore and share your world, putting you back in the moment.” As far as I can tell, it doesn’t help you to explore your world at all. It helps Google to explore your world. And this notion of “your” world. What does that even mean? I think Google has flat out given up on the idea of connecting people and, instead, has decided to help them curate their lives and, playing to the collective bloated ego, has started replacing “life” with “world.”
And I’m glad that Google Glass will help to put me back in the moment that it took me out of.
@granulac (Meredith Gran): I can’t wait for Project Glass to exist so a bunch of mutants can yelp “WHERE’S THE MUSIC SECTION” to no one in a bookstore
@granulac (Meredith Gran): more incredible Project Glass features: realtime Instragram filter, homeless person detection/avoidance, Put a Hat On Everyone?
Google isn’t the only tech giant doing interesting things with augmented reality.
There’s a video available for viewing (but not for embedding, unfortunately) from Design at Microsoft Research Asia (Beijing). It’s simply called SemanticMap – Vision, and is similar to Google’s Project Glass video in that it shows off more the concept than anything tangible.
Here’s the description that accompanies the video:
SemanticMap, The Next Step In Public Information and Navigation On The Go is a digital signage prototype featuring proximity detection, face recognition and gesture interaction technologies developed at Microsoft Research Asia. The system provides the right amount and detail of map-related information according to the user’s distance from the display.
The tagline that appears in the video is: “The information you want, on the go.”
TechCrunch says Microsoft reached out to them to show it off. Ingrid Lunden reports: “Unlike Google’s glasses, Microsoft’s technology doesn’t require the user to have any special headgear or other equipment; and it makes use of three key bits of technology that Microsoft is working on and will very likely become more and more ubiquitous in the years ahead: face analysis, gesture recognition and proximity detection. Microsoft has already been using some of this to good effect in the Kinect.”
Lunden spoke with a senior research designer with the company, who indicates that there are no plans at the moment to actually create what’s seen in the video as a new Microsoft product. That’s probably why there’s not much in the way of additional info out there about it.
The old corkboard-and-paper community billboard gets injected with a dose of cyberspace today as a new app that socializes augmented reality experiences becomes available to the general public. The app, Wallit, is a free geo-social app that enables people to create and exchange multimedia messages on virtual walls located around the world.
Wallit’s developers call it a new kind of geo-social network that exhibits the character of places. Its virtual walls provide a canvas for people to discover and share sentiments. The augmented walls are viewable on smartphones (as of now, it’s only available for the iPhone but an Android version is in the works), unveiling location-specific content and inviting the opportunity to connect with others who are there.
Wallit describes itself as a tool to view and post on virtual walls around the world in the same way that you might post on a friend’s Facebook page, but given that you can see the Wallit posts of anybody on any wall (and most of these anybodies you probably won’t know), my impression was more along the lines of a superimposed Craigslist forum.
In essence, Wallit can best be thought of as one part clean graffiti and one part virtual community message board. “From prehistoric cave paintings to simply carving ‘Tommy Was Here’ into a tree, people have always felt a need to leave their mark to not only show others they had been there, but also to be a lasting part of a place,” said Dr. Veysel Berk, founder and CEO of Wallit. “Today, Wallit is proud to say that we have developed an app that is very much like a ‘Facebook wall for Places’ to satisfy this timeless urge to interact with and add character to one’s surroundings through the latest location-based and augmented reality technologies.”
As much as you may desire to be a lasting component of your favorite places, you will only be able to use pre-determined walls defined by Wallit. After I downloaded it, I tried to find a wall, but apparently I have no walls around me (to spite my eyes). As Wallit’s FAQ explains, you can’t simply post on any wall – walls are created and designated by Wallit administrators, who create the walls based on the popularity of the location. “There are an increasing number of walls each day,” the site says, “So stay tuned for more walls at your favorite locations nearby.”
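Wallit hasn’t documented how its wall geofencing works under the hood, but the behavior described above — walls that only appear when you’re physically near a curated location — maps onto a simple radius check against the user’s GPS position. Here’s a hedged Python sketch; the wall names, coordinates, and radii are made up for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical admin-curated walls (Wallit's real data and radii are not public)
WALLS = [
    {"name": "Golden Gate Bridge", "lat": 37.8199, "lon": -122.4783, "radius_m": 300},
    {"name": "Empire State Building", "lat": 40.7484, "lon": -73.9857, "radius_m": 150},
]

def nearby_walls(user_lat, user_lon):
    """Return walls whose geofence contains the user's position."""
    return [w["name"] for w in WALLS
            if haversine_m(user_lat, user_lon, w["lat"], w["lon"]) <= w["radius_m"]]

print(nearby_walls(37.8197, -122.4786))  # ['Golden Gate Bridge']
```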
When I clicked “+Request Wall,” I was presented with a screen to type in something but wasn’t really told what to type. I typed a couple of generic places, like “Mall” or “Panera Bread” (which has many walls across the street from me) but I received a message that says “Could not find a wall. Click here to request one.”
I submitted my wall request but was then prompted to link my Twitter account to Wallit in order to suggest a proposal for the creation of a wall near me. I clicked “Cancel” because I didn’t want to link my Twitter account right away, but I was confusingly returned to the “+Request Wall” page.
According to the Wallit FAQ, wall proposals can be submitted through the app’s official Twitter account.
Maybe I’m revealing more about my technological deficiencies than I care to admit by saying this, but I honestly couldn’t figure out how to get Wallit to work. At this point, because it’s a new app, my tendency is to say that it’s an app better suited for vacation locales that have popular tourist attractions, like an Empire State Building or Golden Gate Bridge or even an Apple Store.
There’s a radar tab that I presume is meant to allow me to locate walls ready to be written on, but it didn’t show me anything other than what I think was the ocean area of a virtual map. I say this uncertainly because half of the screen was obscured by a blank white area.
Given how Wallit appears to work in demonstrations on its website, it looks like a fun app that lets users personalize public spaces. I just couldn’t duplicate the results in real time. If anybody has any actual success with using Wallit, we’d love to hear about it in the comments section. It’s a fun concept, so hopefully as more people begin using it, more wall spaces will be created to widen Wallit’s availability.
While the concept of local search is still growing, and its potential has yet to be fully realized, the smartphone industry has allowed local, map-based search queries to become much more robust. A good example of the potential of local search, especially in regards to consumers, comes from Google Goggles, which makes use of augmented reality technology.
AR brings a depth to local search that goes way beyond looking at placeholders with various reviews attached to them. An example of the augmented reality technology in action:
With that simple demonstration, you can see the potential for local search, and why it would be so attractive to local business owners who may not have the marketing budget of their local McDonald’s chain. That further explains why Google is now being tasked with eliminating the “Closed” spam that has been infecting the local search market.
The story stems from a New York Times report on the growing affliction of local businesses finding their Google Maps entries marked as closed. The article uses the owner of the Coffee Rules Lounge, located in Kansas, as an example of what Google calls “spammy closed listings.”
Apparently, the owner discovered his business was listed as “permanently closed” on its Google Maps listing, even though the coffee shop was not actually out of business. The Times article expands on the concept:
On Google Places, a typical listing has the address of a business, a description provided by the owner and links to photos, reviews and Google Maps. It also has a section titled “Report a problem” and one of the problems to report is “this place is permanently closed.” If enough users click it, the business is labeled “reportedly closed” and later, pending a review by Google, “permanently closed.”
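As a rough illustration of that escalation – the actual report threshold and review criteria are not public, so the numbers and names below are invented – the flow might look something like this in Python:

REPORT_THRESHOLD = 5  # hypothetical; Google does not publish the real number

def listing_status(report_count, reviewed_by_google, confirmed_closed):
    # Enough user reports push a listing to "reportedly closed";
    # a later review by Google decides whether it becomes
    # "permanently closed" or reverts to open.
    if report_count < REPORT_THRESHOLD:
        return "open"
    if not reviewed_by_google:
        return "reportedly closed"
    return "permanently closed" if confirmed_closed else "open"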
That’s not a bad tool to have if you’re trying to bury your competition and you want potential consumers to believe the competing businesses are no longer operational.
As a direct response to the Times article, Google posted an entry on its Lat/Long Blog, titled “Combatting Spammy Closed Listing Labels on Google Maps,” in which it promised the situation was being addressed:
About two weeks ago, news in the blogosphere made us aware that abuse — such as “place closed” spam labels — was occurring. And since then, we’ve been working on improvements to the system to prevent any malicious or incorrect labeling. These improvements will be implemented in the coming days.
Of course, Google doesn’t actually explain these improvements – a reticence the Times article also noted when discussing Google’s review process:
“Google was tight-lipped about its review methods and would not discuss them.”
Does Google’s lack of specifics trouble you, or is the fact that Google is addressing this very powerful anti-competitive mechanism enough for you? Or would you like to see Google be more open about the processes it employs, both in reviewing whether or not a business is actually closed and in counteracting this “place closed” spam? Let us know what you think.
WebProNews recently spoke with Layar’s augmented reality strategist Gene Becker about the present and the future of augmented reality. Layar thinks AR will become essential to consumers’ mobile experiences.
Ogmento is an augmented reality gaming company that says it’s helping to change the way consumers interact with their smartphones. Ogmento recently released an iPhone game called Paranormal Activity: Sanctuary (based on the popular movie franchise) that combines augmented reality, geo-social elements, and user-generated content – an interesting combo. President and co-founder Brian Selzer shared some more insights on the industry with us.
The company says it has created the first game that combines geo-social elements and computer vision augmented reality to physically "check in" to a location. For example, if GPS says the user is standing near a Starbucks, and the phone’s camera is looking at a Starbucks logo, then a special reward opens for the player. This reward is proof that the player is, indeed, where they say they are, the company explains.
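Ogmento hasn’t published its code, but the dual test it describes can be sketched in a few lines of Python; the venue format and the 75-meter radius here are assumptions for illustration only:

from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two GPS fixes, in meters.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def verified_check_in(user_lat, user_lon, venue, logo_detected, max_distance_m=75):
    # A check-in counts only when GPS proximity AND a camera logo
    # match agree -- a couch-bound check-in fails the logo test.
    near = haversine_m(user_lat, user_lon, venue["lat"], venue["lon"]) <= max_distance_m
    return near and logo_detected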
The user-generated content comes in as users can draw out symbols on a piece of paper and have them come to life on their iPhones.
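Again purely as a sketch (assuming OpenCV 4, and nothing like Ogmento’s actual vision pipeline), finding a hand-drawn symbol on white paper can be as simple as thresholding the camera frame and taking the largest dark contour:

# pip install opencv-python
import cv2

def detect_drawn_symbol(frame):
    # Find the largest dark shape drawn on white paper; a game could
    # then anchor a virtual character to this bounding box on screen.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)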
“The combination of AR (computer vision) with LBS (location-based..) has been called the holy grail by some in marketing,” Selzer tells WebProNews. “Applications like Foursquare and Gowalla currently rely on GPS to verify where a user is. This is not accurate, and users can check in to 100 places from their couch. People are rewarded for not even going into the actual locations. With AR, it’s an extra verification that a person is actually there, because it’s about vision. The camera can recognize a logo, or a sign, and that along with GPS makes for a much stronger connection. AR also allows users to interact with brands.. mixing real world and virtual world gaming experiences. The engagement is much stronger. So now mobile marketing is about understanding where people are, what they are doing, having them interact with their brands, reward them for these interactions, all in the real world. The dynamic nature of our games allows us to create campaigns on the fly for locations, time of day, and much more.”
“AR is about computer vision… seeing, understanding and interacting with the real world around us,” he says. “A traditional mobile game is not inherently mobile. It does not think about LBS, or interacting with the people, places, things around us. With AR, we turn the real world into the game experience. This is a fundamentally different approach to gaming, and will only grow as the technology advances.”
“LBS is already catching on quickly,” says Selzer. “Our belief is that LBS, plus great game mechanics, plus AR will make the mix much more powerful though. LBS will likely go mainstream without AR, primarily because the tech is still very nascent. As the sophistication of mobile computer vision advances though, LBS apps will take on an entirely new level of usefulness.”
As tablets are becoming hot ticket items, we asked Selzer how big of a role this emerging market plays in the growth of augmented reality. “Tablets will be great for table top based AR,” he says. “You can imagine a family around the table, viewing mixed-reality games, the same way people play board games. It will be great to experience AR on these larger platforms in the real world too, but the practicality of holding them up for any length of time is just not there. Truly, the smartphone and tablet are just a gateway to what everybody in mobile AR wants, and that’s hands-free goggles. I’m looking forward to the day when we can stop holding up devices in front of our face, and just naturally experience an added digital, contextual layer on the real world through a set of well-designed eyewear.”
This idea echoes something Layar’s Becker told us: “In the longer term, we all like to envision a world where we have immersive displays that you can put on just like a pair of sunglasses, and then suddenly the entire world can be sort of continuously augmented with information all around you.”
“It’s early, and true AR is hard,” says Selzer. “We are seeing a lot of great new AR experiments and early games on a regular basis now. The buzz and expectations are high. Some see it as a fad, because they are judging the current state of AR and saying, ‘yeah, it’s cool, but so what?’ It’s about the ‘wow-factor’ of seeing mixed reality right now. In a couple years it will be more about having a deeper level of engagement with people, places, things in the real world… to add another layer of content onto the real world which is contextually tied to what you see. When AR apps show that they truly understand what they are looking at on a consistent basis, that’s when AR will become mainstream. For now, it’s very cool, very fun… and perfect for gaming and marketing folks.”
Currently, Ogmento only works for iOS, but Selzer says the company is “platform agnostic” – iOS is just its first platform. The company is already developing for Android, and says it will continue to move on to other platforms: other mobile devices, portable gaming devices, PC, and perhaps even consoles. “Basically any platform that has cameras, sensors, and supports Augmented Reality experiences is being looked at,” he says.
Late last month, augmented reality developer Layar announced that it was making its platform available to all developers of iOS apps, opening the door for a lot more innovation and practical use cases for AR technology. "All apps and services that have a location aspect can now easily and without license costs be enhanced with an AR view of their content," Layar co-founder Maarten Lens-Fitzgerald told WebProNews. "It fits with the new trends within the AR industry, which is the democratization of this new medium – lowering the barrier to enter the new realm of AR."
Since then, WebProNews spoke with Layar’s augmented reality strategist Gene Becker about what the future holds for not only Layar and the apps that take advantage of its platform, but for the technology and the industry as a whole. "We think of AR as really an emerging medium for creative expression and communication. It’s a medium that’s digital, that’s interactive, but it’s also uniquely physical in nature," he said.
"Think about the web back in 1994," he said. "The web was really – as we look back on it now – it was a democratization of the ability to publish – basically to put anything out on the web and connect with anybody in the world. We see augmented reality as kind of being in the early stages, a little bit like the web in 1994. That was kind of the days of black text on gray backgrounds, but it was a fundamental shift in terms of what kind of capabilities it gave people to publish and communicate with the world."
"We think it’s really important that we enable anybody to create AR experiences to augment their physical world, and that’s going to be one of the things that really helps AR to take off and become mainstream, and a part of everybody’s life," he continued.
Once Layar opens up its platform to other developer ecosystems, growth is bound to be fueled even more. Lens-Fitzgerald told us, "We are always looking to expand to other platforms," and Layar’s Layar Stream feature, for content discovery, went to Android even before iOS.
"When you augment the world it’s probably going to touch just about everything eventually, but I guess if you look at the kinds of content layers that we currently have on our platform, you can kind of get a sense of the range of things that are starting to be touched," said Becker of the technology. "We have commercial layers, and things like retail store finders. We have promotions – marketing promotions for new films coming out…there are games. People are making a variety of different kinds of interactive games…there’s data visualizations – people looking at things like visualizing earthquake magnitudes in real time, looking at pollution visualization…there’s also art exhibits both from established museums as well as from ‘guerilla artists’ who were sort of appropriating AR space for their works."
"I think ultimately, it will touch everything," he said.
"I think one of the big challenges that we have is, it is early days, and up until now, a lot of people have really positioned augmented reality as this sort of really cool technology thing," Becker said. "That’s pretty typical for a new, emerging space. We really feel like one of the big challenges for this year and the next couple years is to get past that ‘wow, gee whiz technology’ thing, and really get onto the business of creating a new medium that people can use to express, to connect, and to communicate."
"The early adopters – the techies – get it," he added. "They like it, but that’s not where we’re going to add value to people’s lives more broadly."
Of course Layar isn’t the only company out there making use of AR, and Layar prefers it that way.
"There’s definitely a growing number of AR companies out there," said Becker. "Most of them are actually our good friends. It’s a small industry, and at this point, I think the fact that there is competition is actually one of the best things that we can see, because it says there really is something here. There’s a real market. It supports multiple players, and we’re looking forward to helping push the envelope along with a lot of our friends in the States."
"I think that over time, AR is really going to become an essential aspect of the mobile experience," he said. "The same way that today we think about email and social media and mapping and so forth. I think that AR is really going to be something that people use every day when they’re out and mobile. In the longer term, we all like to envision a world where we have immersive displays that you can put on just like a pair of sunglasses, and then suddenly the entire world can be sort of continuously augmented with information all around you. And I think that’s several years away still, but I think that when it gets to that point AR’s going to be almost like a sixth sense that we just rely on that we won’t know in some ways, how to do without."
There continues to be a steady buzz about QR codes, those bar-code-looking thingies that can be scanned by a smartphone to link you to added content, a website, or perhaps even a coupon at the point of purchase. Here’s the Wikipedia definition.
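For a sense of how little machinery is involved, here is a minimal sketch that generates one of these codes using Python’s open-source qrcode library; the coupon URL is just a placeholder:

# pip install qrcode[pil]
import qrcode

# Encode a (placeholder) coupon URL; any smartphone scanner app
# decodes the image straight back to the link.
img = qrcode.make("https://example.com/coupon/SAVE10")
img.save("coupon_qr.png")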
I’m not an expert in QR codes – or anything for that matter — but I’ve been around long enough to have a good idea if something is going to work or not. I’m thinking the buzz on QR codes may be short-lived – and I’d like to explain why by telling you a short story about a beer can.
One of my most interesting jobs was global marketing director for aluminum packaging products (like beverage cans). While this may sound mundane, the opportunity to nurture $2.5 billion in sales with some of the world’s biggest brands was a lot of fun!
On a customer trip, I noticed the flight attendant had a lanyard around her neck with a strange plastic device on the end. The device served as a fulcrum that she used under the tabs to open each can. I asked her why she just didn’t open the pop-tops with her fingers and she pointed to her well-manicured nails.
I suddenly realized that our humble package had a big problem. A significant part of the population — people with manicured nails — needed a secondary device to open the package. We were vulnerable! Any competing package that did not require a secondary "opener" (like plastic bottles) would be preferred by these consumers!
This revelation led to an R&D project aimed at an easier-opening lid which included a depressed "well" under the tab to protect well-groomed nails.
QR codes are vulnerable in the same way — you need an "opener" to get to the goods. Consumers will resist this, especially if there is an alternative — and there is.
Last summer I was in Bordeaux and noticed they had QR code posters everywhere to provide information on city events. I was a tourist with money to spend — their target market — but I couldn’t use the system. Problem 1: The instructions were in French. Problem 2: You had to download special software to access the information. Problem 3: As an international visitor, I would have incurred expensive roaming charges just to fetch the content behind the code.
The "opener" in this case was a significant obstacle. If the city went to the trouble of creating posters, why not put up one up that simply had the information people needed? Why make me WORK for it?
Now suppose such a helpful poster existed … you would still have the problem of a language barrier, right? That problem could easily be solved for anybody who had a free smartphone app called WordLens. This technology is part of a swelling trend called augmented reality that I think will leap-frog the QR code innovation.
In this example, by simply holding the phone in front of the foreign-language text, you get an instantaneous translation and access to the information when you need it, where you need it. No instructions. No dependence on an Internet connection. No expenditure of time or money. It’s just an extraordinarily user-friendly experience.
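The recognize-translate-overlay loop behind that experience can be sketched offline; this hypothetical example uses the open-source pytesseract OCR wrapper and a toy phrasebook in place of WordLens’ real translation engine:

# pip install pytesseract pillow  (also requires the Tesseract OCR binary)
from PIL import Image, ImageDraw
import pytesseract

# Toy phrasebook standing in for a real translation model.
PHRASEBOOK = {"sortie": "exit", "billets": "tickets", "gratuit": "free"}

def translate_sign(path):
    # OCR the sign, swap known French words for English ones, and draw
    # the result back onto the image -- the same loop WordLens runs
    # live on every camera frame.
    img = Image.open(path)
    text = pytesseract.image_to_string(img, lang="fra")
    translated = " ".join(PHRASEBOOK.get(w.lower(), w) for w in text.split())
    ImageDraw.Draw(img).text((10, 10), translated, fill="red")
    return img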
I don’t think you can question the power of the idea behind QR codes, but I have reservations about customer adoption. I believe augmented reality is one of the seminal technologies of 2011 and a development that could render QR codes obsolete in many cases. Imagine holding your phone up in front of a city street and having discounts, movie times, even the names of nearby friends overlaid on top of the buildings. Or using the phone to scan a display of shirts to immediately find your size, discounts, and matching pants and accessories.
There will probably be legitimate uses for QR codes, especially for industrial applications and logistics tracking, but I believe augmented reality may leap-frog the innovation in the consumer arena before it leaves the gate. This is just one opinion and I’m sincerely open to debate here — what’s your take on it?
As reported earlier, augmented reality app provider Layar announced that it’s making its AR technology available for all iPhone apps via the Layar Player platform. While only three apps are utilizing it so far, you can expect to see a lot more innovative AR-related features in many apps down the road.
Layar co-founder Maarten Lens-Fitzgerald tells WebProNews, "All apps and services that have a location aspect can now easily and without license costs be enhanced with an AR view of their content. Any popular known service can use the Layar Player to add an AR view."
There is a lot of room for augmented reality in both e-commerce and local business, as evidenced by existing products, and clearly there is demand for more.
"This product of Layar is specifically inspired by all the brands and agencies that have approached us in the last year wanting their own AR experience within their own application," says Lens-Fitzgerald. "This is how we answered this market need."
"It fits with the new trends within the AR industry, which is the democratization of this new medium – lowering the barrier to enter the new realm of AR," he adds.
On a possible Android release, he says, "We are always looking to expand to other platforms, but have nothing to announce at this time."
When Layar launched its Layar Stream feature, for content discovery, it went to Android first, so I don’t imagine that Android will be too far off for this.