WebProNews

Category: VirtualRealityTrends

  • Scott Belsky: Augmented Reality Will Be Bigger than the Web

    Adobe’s Scott Belsky says that augmented reality will be as big or bigger than the Web itself. “Augmented reality, to me, is the next major medium,” says Belsky. “I actually would go on the record saying that I think someday AR will be as big, if not bigger, than the Web because it will literally be everywhere.”

    Scott Belsky is an internet visionary who co-founded Behance which was acquired by Adobe in 2012. Belsky is currently Chief Product Officer and Executive Vice President, Creative Cloud for Adobe. He also has a new book out, The Messy Middle: Finding Your Way Through the Hardest and Most Crucial Part of Any Bold Venture.

    Scott Belsky was recently interviewed on CXOTalk with Michael Krigsman about AI and AR and how they will impact the customer experience (video below):

    The Age of Artificial Intelligence

    This is probably the most exciting part of my job: thinking about these future mediums. Everyone is going to have to take them into consideration. If those terms sounded new to any of your viewers, that’s a problem, and for a few reasons.

    On the AI side, every company has a lot of data on its customers that it needs to use to make the customer experience better. Customers are going to stop being forgiving of presumptuous defaults that don’t work for them, of questions that companies should already know the answer to.

    In the Age of AI, Customers Will Expect Personalization

    Customers are going to expect a personalized experience because we’re in the age of AI. That starts with instrumenting your services and products to collect the right data. Then it means hiring a team that can understand that data and start to extrapolate lessons from it. Then it also means designing products to take it into account, which is personalization.

    It’s a real vector of the future. I think companies are going to start competing on the data they have to enable a better customer experience. I think that every company, especially big ones, will be trampled by those that leverage their understanding of their customers if they don’t start doing so themselves. We can talk more about that.

    Augmented Reality Will Be Bigger than the Web

    Augmented reality, to me, is the next major medium. I actually would go on the record saying that I think someday AR will be as big, if not bigger, than the Web because it will literally be everywhere. It will be a layer on everything we see. We will walk down the street and know where the people we know have been and what their ratings were. Yelp will come alive to us, right? Directions will be transformed. There will be LinkedIn bubbles over everyone’s heads. You’ll have this amazing amount of knowledge and insight about everyone in every room you enter.

    Then, if you take those glasses off, or whatever you’re looking through, you’ll feel somewhat dumb. You’ll be like, “Oh, my goodness. I don’t know my connections to anyone around here. There’s nothing left for me here. There are no remnants from when I was here last, none of my old notes.” You’ll want to put it back on. That is this future world.

    At Adobe, we think about the fact that that world will be very dry if it isn’t rich with creativity and content. That’s why we’re very focused on the future of augmented reality from the creative tooling and marketing analytics perspective.

    Expectation That We Can Talk to Any App or Device

    Let’s talk about Voice for a moment. I think we’re going to have an expectation that we can talk to any application or device that is in our lives and ask simple questions and get very, very quick answers. Look no further than anyone who has young kids. They can’t necessarily navigate to a song on Spotify on a phone or whatever, but they can ask for the song from Alexa, and they can use that all day. It’s very, very, very powerful. [There’s] a lot of design implications for voice interfaces as well, and that’s why these new mediums are super exciting, ripe with challenges, but everyone has to start thinking about them.

  • Oculus Exec Yelena Rachitsky Talks About How VR Can Move Beyond Gaming

    Most virtual reality products are aimed at gamers because the fit is natural and immediately understood. Can VR move beyond gaming? Oculus’s executive producer of experiences offers her insights.

    Yelena Rachitsky, Executive Producer, Experiences at Oculus, a virtual reality technology company owned by Facebook, was recently interviewed by TechCrunch writer Lucas Matney:

    It’s Not Just About Content, Technology is Making it Easier

    We’re focusing a lot more on highly interactive content and marrying concepts we’re learning from gaming with more narrative approaches. Instead of shooters and strategy, how do we use these mechanics, this understanding of how our bodies work, these natural, intuitive mechanics, to create pieces that people actually want to come back to, pieces people actually enjoy and that don’t necessarily feel like playing a game.

    So we’re marrying that knowledge with the form factors. I think a few people have mentioned Quest, which is something we’re super excited about. So it’s not just the content; it’s also the technology that’s coming and making it easier.

    Technology is Also Working to Make Things More Intuitive

    A lot of technology is also working just to make things much more intuitive. It’s a combination of how we’re approaching content, making it more compelling, more intuitive, more interactive, more emotional, with the form factors in the hardware. The thing I’m really interested in is how we approach experiences that have more natural, intuitive interactions versus a lot of button pressing.

    I gave this talk at Oculus Connect recently about embodiment and what makes us feel like something’s ours when we connect with an object. Our Facebook Reality Labs research talks about something called object believability: we really believe that we’re picking up an object if it’s something we recognize from doing in the real world.

    The Hard Part of VR is That We Are Holding Controllers

    The hard part about VR is that we’re actually holding controllers in our hands. So how do you make your brain believe that you’re actually picking up those objects? People have approached this in different ways. With Job Simulator (by Owlchemy Labs) you have big hands that press really, really big buttons. There’s something very rewarding about that. Then there’s a game from Ready At Dawn called Lone Echo, which put a lot of effort into how the hands form themselves around objects, because if you see your hands actually shift the way they should in real life, your brain believes it, and it becomes super rewarding.

    With a lot of the projects we’re creating we’re still experimenting; we still don’t know a lot of this stuff, but we’re going all the way from fully interactive to still slightly linear. There’s no magic formula to it. Everything’s about the intent you want to create, and then all the tools you use in VR to push that intent forward.

  • Thinking About Using AI to Recruit New Staff? Amazon’s Failed Experiment Might Have You Thinking Twice

    Companies that are planning to use artificial intelligence for recruitment should think twice before doing so. A new report revealed that Amazon’s AI recruiting tool learned gender bias and weeded out women as potential job candidates. The machine even downgraded applicants based on the school they attended.

    A growing number of employers are using AI to boost the efficiency of their hiring process. The machine can be utilized to evaluate resumes, narrow down a list of applicants, and recommend candidates for the right post within a company. It can then pass its findings to human recruiters for assessment. While AI is an effective tool for screening resumes, it has been shown to develop bias, as Amazon’s experiment proved.

    Reuters reported that the retail giant spent several years developing an AI that would vet job applicants. The machine was trained on the resumes the company had received over the previous ten years. But as most of these applications came from men, the patterns the AI identified were strongly oriented toward male candidates. In short, Amazon’s AI learned gender bias.

    For instance, the AI developed a preference for terms like “captured” or “executed,” which were words commonly used by male engineers. The machine also began to penalize applications that included the word “women” or “women’s.” So describing yourself as the head of the “women’s physics club” was a strike against you.
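    The mechanism behind this is easy to reproduce in miniature: if a system scores each word by how often it appeared in historically successful versus unsuccessful applications, any word correlated with the underrepresented group picks up a penalty. A toy sketch (the resumes, outcomes, and scoring below are invented for illustration; Amazon’s actual system was far more sophisticated):

```python
from collections import Counter

# Invented historical data: tokenized resumes with past hiring outcomes.
# The skew is deliberate: the "hired" examples use male-coded wording.
hired = [
    ["executed", "project", "roadmap"],
    ["captured", "market", "share"],
    ["executed", "migration", "plan"],
]
rejected = [
    ["women's", "physics", "club", "president"],
    ["women's", "soccer", "team", "captain"],
]

hired_counts = Counter(t for resume in hired for t in resume)
rejected_counts = Counter(t for resume in rejected for t in resume)

def token_score(token):
    """Naive score: net count of a token in hired vs. rejected resumes."""
    return hired_counts[token] - rejected_counts[token]

def resume_score(tokens):
    return sum(token_score(t) for t in tokens)

# "women's" picks up a penalty purely from the skewed history.
print(token_score("women's"))   # -2
print(token_score("executed"))  # 2
```

    Zeroing out specific terms, as Amazon’s engineers did, leaves every other correlated proxy (schools, clubs, verbs) in place, which is why fixes of that kind are so hard to verify.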

    A source familiar with Amazon’s AI program also admitted that the machine even downgraded applicants who graduated from two all-women’s universities. The names of the universities were not specified in the report.

    The bias in the AI’s algorithm became noticeable a year after the project started, and Amazon reportedly tried to correct it. The company’s engineers initially edited the system to make it neutral to these specific words. However, there was no way of proving that the machine would not learn another way to sort candidates in a discriminatory manner.

    The project was eventually shelved in 2017 because company executives lost confidence in it. The AI also reportedly failed at providing choices for strong and effective job candidates.

    Fortunately for Amazon, the AI hiring experiment was just a trial run. The machine was never rolled out to a larger group and was never used as the main recruiting agent. Nevertheless, the possibility is high that a qualified applicant was weeded out simply because she was a woman and did not think to use a masculine-coded term like “captured.”

    [Featured image via Pexels]

  • WPN Today: Facebook Adds Analytics Tools to Increase Conversions for Business

    Facebook has launched new tools in its analytics suite to help businesses using its platform better convert visitors into customers or subscribers. At F8, Facebook’s developer conference, the company announced Journeys from Facebook Analytics, which anonymously aggregates visitor data to show businesses the patterns that lead (or don’t lead) to sales and conversions.

    “For example, you may find the people most likely to convert first browse your website on mobile before ultimately making a purchase on desktop,” Facebook noted. “Or maybe, the people who interact with your business on Messenger ultimately spend more in your mobile app. You can then use these insights to optimize your marketing strategy and grow your business.”

    To make it work, a business simply adds custom parameters to its events in Facebook Analytics, enabling it to automatically receive key user insights. One way to use this is through funnel conversion insights, which provide key data on where in the process visitors tend to convert or drop off.

    AR Now Available for Brands Using Facebook Messenger

    Since Facebook opened the Messenger Platform to businesses and developers in 2016, it has become a critical customer interaction tool for many brands. Facebook noted at F8 that over 8 billion messages are exchanged between people and businesses each month on the platform. That is four times more than last year and signifies widespread adoption of Messenger by businesses of all sizes, including some of the largest brands.

    At F8, Facebook announced that the Messenger Platform now includes the ability for brands to integrate custom uses of augmented reality into their Messenger experience. “With this launch, businesses large and small can leverage the Camera Effects Platform to easily integrate AR into their Messenger experience, bringing the virtual and physical worlds one step closer together,” says David Marcus, VP of Messaging Products at Facebook. “So, when a person interacts with your business in Messenger, you can prompt them to open the camera, which will be pre-populated with filters and AR effects that are specific to your brand. From there, people can share the image or video to their story or in a group or one-to-one conversation, or they can simply save it to their camera roll.”

    The value for marketers is that they can incorporate better visualizations of their products. This is especially important to brands whose products customers feel more comfortable buying after touching, seeing up close, or trying on, such as apparel. Facebook announced that ASUS, Kia, Nike, and Sephora will be including AR effects in their Messenger experiences soon.

    If you are interested in using these new AR effects in Messenger, you can sign up for the waitlist.

    Facebook Enters the Dating Business

    Also at F8, Mark Zuckerberg announced “Dating,” a new service by Facebook. It will let users create a second profile specifically for the purpose of introductions to others they might want to date or hook up with. The key here is that your friends on Facebook won’t be able to see your dating profile, and they also will not show up as potential matches. This will hopefully eliminate the awkward situation where somebody uses a Facebook friendship as a vehicle for hitting on another person, which in the past often led to unfriending.

    Facebook’s dating algorithm will use various signals to determine potential matches, including whether two people are attending the same Event or are members of the same Group. You can assume that bio info, likes, and geography will also play leading roles in making potential romantic matchups.

    Not everyone is impressed by Facebook’s foray into dating.

  • eBay’s New AR Feature Makes Finding the Right Shipping Box a Lot Easier

    eBay has now made it easier for sellers to ship their items by using augmented reality to pick the right USPS box, the company announced in a Monday press release.

    Using Google’s ARCore platform on Android, eBay leverages motion tracking and environmental recognition to help sellers superimpose virtual shipping boxes of various sizes over a physical product.

    Aside from accurate sizing, the new AR feature will help sellers quickly compute actual shipping costs and save them a trip to the post office to test boxes.

    The new feature can be found in the “Selling” section of your eBay account. To try it, tap the “Will it Fit?” option on your smartphone. You’ll then need to place your item on a flat, non-reflective surface, say a wooden tabletop, for the AR to work.

    Next, tap on your item to place the virtual box over it, then aim the smartphone camera around it to map the surrounding area. You can move around the box and look from all angles to see if the product sticks out while adding room for padding. Once you’ve picked the box, you’re now ready to ship out the item.  
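    Under the hood, the fit check that the AR overlay makes visual is conceptually simple: compare the item’s padded dimensions against each box’s interior dimensions, trying all six orientations. A rough sketch (the box sizes and padding here are illustrative, not eBay’s or USPS’s actual figures):

```python
from itertools import permutations

def will_it_fit(item_dims, box_dims, padding=1.0):
    """Check whether an item (L, W, H in inches) fits inside a box,
    allowing for padding on every side and any of the six orientations."""
    padded = [d + 2 * padding for d in item_dims]
    return any(
        all(p <= b for p, b in zip(orientation, box_dims))
        for orientation in permutations(padded)
    )

# Illustrative box sizes (inches); not actual USPS catalog data.
boxes = {
    "small": (8.6, 5.4, 1.6),
    "medium": (11.0, 8.5, 5.5),
    "large": (12.0, 12.0, 5.5),
}

item = (9.0, 6.0, 3.0)  # a small boxed gadget
fitting = [name for name, dims in boxes.items() if will_it_fit(item, dims)]
print(fitting)  # ['medium', 'large']
```

    What the AR feature adds on top of this arithmetic is measuring the item in the first place, which is exactly where motion tracking and environmental recognition come in.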

    Sellers on eBay ship billions of items annually, so any innovation that simplifies the shipping process will likely be well-received.

    “By coupling Google’s ARCore platform with premiere AR technology built at eBay, we are continuing to make the selling experience more seamless,” James Meeks, eBay mobile head, pointed out. “This technology is just one example of the types of innovation we’re working on to transform eBay. It demonstrates our continual innovation on behalf of our sellers to help them save time and remove barriers.”

    However, the AR feature of the updated eBay app is currently available only on a few ARCore-compatible Android devices in the US. There are plans to eventually extend the feature to iOS devices, but no timetable has been set.

    [Featured image via eBay]

  • eBay Plans to Add AR Features to Enhance Shopping Experience

    eBay is on a quest to make shopping more interactive and enjoyable by incorporating augmented reality into the buying process. The company has even tapped the services of expert data scientist Jan Pedersen to ensure that they’re on the right track.

    In a bid to provide their clients with a better shopping experience, eBay is reportedly developing an augmented reality kit that will help customers see products better and make shopping more dynamic. For instance, the AR kit can help drivers check how a particular tire design will look on their vehicles, or help shoppers examine a dress or an appliance with a more critical eye. Shoppers can also use the kit to check what size box they will need for their purchases.

    eBay is currently on a roll, with a holiday quarter that saw a 10 percent increase in merchandise volume, to $24.4 billion. The season also saw around 170 million shoppers using the platform. The company wants to continue that success and is seeking to convince investors that it can go up against a giant like Amazon. Jeff Bezos’ company currently dominates the market with its same-day delivery system.

    Amazon might be the king of logistics, but as eBay CEO Devin Wenig told investors at a recent technology conference, it’s not the only thing that’s important. According to Wenig, price and inventory are also critical.

    eBay is known for offering one-of-a-kind products at affordable prices, but the company is also looking to improve its inventory. To that end, it is planning to add more clothing and home products to attract women and young consumers. At the moment, the retail giant’s user base skews toward older men.

    Mohan Patt, the company’s vice president of buyer experiences, revealed that eBay is pushing to maintain its growth and is looking to enhance its artificial intelligence to further improve what customers will be offered. The company aims to expand its reach beyond shoppers who already know what they’re looking for to people who are browsing the different product categories, seeking inspiration.

    This is where artificial intelligence and data will come in. According to Patt, this personalization will be the key to getting consumers to purchase items they didn’t know they wanted.

    To ensure that the company’s vision for customization and a more engaging shopping experience goes off without a hitch, eBay has engaged the services of Jan Pedersen. The renowned data scientist will be at the helm of the eCommerce leader’s AI endeavors.

    Wenig describes Pedersen as “a true pioneer” and said he joins the company at a crucial time when AI is “capable of transforming personalized, immersive shopping experiences.”

    Pedersen and his team will be responsible for developing eBay’s strategy and technology that will be used to better interact with customers.

    [Featured image via eBay]

  • Amazon Web Services Improves AI for New Consultancy Program

    Amazon has rolled out a consultancy program with the goal of assisting customers with cloud machine learning. The company plans to do this by connecting clients with its own experts.

    Dubbed the Amazon ML Solutions Lab, the program helps clients unfamiliar with machine learning find beneficial and efficient uses of it for their company. Amazon plans to do this by combining brainstorming sessions with workshops to help clients better understand machine learning in the cloud. The company will also have its experts act as advisors to clients; together they will work through the problems the client faces and come up with machine learning-based solutions. Amazon’s cloud experts will also check in with the client weekly to see how the project progresses.

    No two solutions will be alike though, as the ML Solutions Lab will work according to the needs of the business. For instance, Amazon could send their developers on site if the client wants a more hands-on approach or clients could go to AWS’ Seattle headquarters for training.

    How long the ML Solutions Lab will work with a company will also depend on the client, but engagements are expected to last anywhere from three to six months.

    Companies with more machine learning experience can opt for the ML Solutions Lab Express, an expedited program that runs for a month and begins with a seven-day intensive bootcamp at Amazon headquarters. However, this program is only offered to companies whose data is already suitable for machine learning, since it is geared towards feature engineering and building models swiftly.

    Amazon has not shared any details yet on how much the program will cost companies. No information has been posted on its website yet and company representatives are reportedly not responding to any requests at the moment.

    Vinayak Agarwal, Amazon’s senior product manager for AWS Deep Learning, pointed out in a blog post that the company has been immersed in machine learning for the past two decades. He also added that Amazon has pioneered innovations in areas like forecasting, logistics, supply chain optimization, personalization and fraud prevention.

    Agarwal further encouraged clients to take a closer look at the Amazon ML Solutions Lab, saying that they will have access to the experts who developed many of the company’s machine learning-based products and services, such as its recommendations and fraud prevention systems.

    The Amazon ML Solutions Lab is being offered to customers worldwide. However, the ML Solutions Lab Express is currently exclusive to US clients.

    To get started with the Amazon ML Solutions Lab, visit https://aws.amazon.com/ml-solutions-lab.

    [Featured image via Amazon Web Services]

  • Oculus Rift Goes After Business Sector With New VR Bundle

    Oculus VR is now trying to tap into a new market to boost sales of its high-end virtual reality device. The company is launching Oculus for Business, effectively sweetening the deal for companies to snap up its Oculus Rift bundle.

    Oculus vice president Hugo Barra announced the business bundle during the Oculus Connect conference in San Jose, TechCrunch reported. The Oculus for Business package costs $900 and includes the hardware, dedicated customer support, a full VR license, and enterprise-grade warranties.

    The hardware package consists of the Oculus Rift headset, three room sensors, three Rift Fits, and Oculus Touch controllers, according to The Verge. Bought separately, the hardware costs $574, but the $900 business bundle is still considered a good deal given the warranty, commercial license, and preferential customer support that come with it.

    While virtual reality is still a relatively new technology, businesses can harness its potential to improve their reach and exposure as well as streamline their operations. For instance, VR can be utilized to aid personnel training and even allow potential customers to examine products in great detail in the virtual world. In the future, VR could be indispensable to a host of industries such as construction, manufacturing, education, tourism, and health.

    In fact, the German automobile manufacturer Audi is one of the launch partners of the Oculus for Business. The carmaker harnessed VR to build virtual showrooms where potential buyers can configure a car and even walk around it to assist them in the selection and customization process.

    Aside from Oculus VR, other virtual technology manufacturers such as Apple have been pitching their VR hardware as a business tool. Admittedly, the market for VR gear among individuals is a bit limited, as the price of the gadgets is a bit too steep for the average consumer. In addition, setting up a VR rig inside a home also requires some expertise. Businesses, however, have bigger budgets and tech-savvy staff at their disposal which makes it easier for them to buy and install VR equipment.

    [Featured Image by YouTube]

  • Microsoft Develops New Chip for HoloLens 2; Promises No Lag, Real-Time Processing

    Microsoft revealed that its next-generation HoloLens will pack quite a punch. Scheduled for a 2019 release, the new device will arrive with an all-new, more powerful artificial intelligence (AI) chip.

    Augmented reality (AR) technology has been steadily gaining ground in recent years. Among its most recent successes is in the gaming industry, where Niantic’s AR mobile game Pokemon Go became a huge hit, dominating the gaming charts in 2016.

    However, Microsoft is betting that AR technology will have practical applications beyond gaming. Thus, the software giant introduced the HoloLens, a pair of AR smart glasses that can be programmed to assist users in a variety of tasks, such as guiding tourists unfamiliar with a city, fixing engines, and even supporting surgical operations with visualization tools.


    More Powerful Processor

    At the heart of the HoloLens 2 is a new AI coprocessor. According to Time, its main task is to run deep neural networks, an approach loosely modeled on how the human brain works. The more powerful processor enables the new device to handle, at lightning-fast speeds, the large amounts of data that an ever-changing world throws at it.

    No Lag Time

    Users of this new device will benefit greatly from its upgraded AI chip. The dedicated processor will ensure that the upcoming gadget will process data in real time without any lag, a necessary quality especially in systems that require split-second decisions like driving.

    Self-Contained System

    According to ARN, another advantage is that with the new AI chip in place, the HoloLens 2 can be self-contained. Simply put, since the device has its own processor, it is untethered and does not have to depend on a PC or a cloud service to function.

    This advantage is highlighted by Tirias Research’s Jim McGregor in a Seattle Times report. “For an autonomous car, you can’t afford the time to send it back to the cloud to make the decisions to avoid the crash, to avoid hitting a person.”

    [Featured Image by Microsoft]

  • Google Glass Makes a Comeback with Focus on Enterprise Market

    Many critics viewed Google Glass as an expensive failure in the consumer market soon after the product launched in February 2013. Its lack of practicality, coupled with a hefty $1,500 price tag, made it a hard sell to the public.

    In a sad 2015 announcement, Google shut down the Google Glass website, thanking users for “exploring with us” and promising that “the journey doesn’t end here.” Since the product was taken off the market, the Google Glass Explorer Edition has kept a low profile.

    However, despite pulling the product from the public market, Alphabet continued to supply Google Glass to US companies including GE, Boeing, DHL, and AGCO. The glasses slowly found their calling in the enterprise market.

    In the hands of AGCO, Google Glass was able to reduce production times by 25 percent, while healthcare professionals found that using the product reduced paperwork loads by 20 percent. As a result, doctors were able to spend 50 percent more time with patients. Meanwhile, DHL also shared their improved working experience with Google Glass, claiming that they were able to increase supply chain efficiency by 15 percent.

    After making improvements to the Glass design and hardware, X, Alphabet’s “moonshot” research and development subsidiary, reintroduced the eyewear as the Glass Enterprise Edition. The latest version of Glass detaches easily from its frame, which makes it more shareable and affordable when deployed across different industries. It includes an updated camera module with resolution improved from 5 megapixels to 8, a longer battery life, a more powerful processor, and an improved user interface.

    With the Glass Enterprise Edition, it seems that Google has learned from the shortcomings of its once-experimental Glass product and invested in a field where the device isn’t a mere trendy accessory, but a tool representing innovation and advancement in many fields.

  • Google Unveils PAIR Initiative to Improve Relationship Between Humans and AI

    Google announced Monday a new initiative geared towards improving the relationship between humans and artificial intelligence (AI).

    The project, called People + AI Research (PAIR), will see Google researchers analyze the way humans interact with AI and the pieces of software it powers. The team, to be led by Google Brain researchers and data visualization experts Fernanda Viégas and Martin Wattenberg, will work to determine how best to utilize AI from the perspective of humans.

    “PAIR is devoted to advancing the research and design of people-centric AI systems. We’re interested in the full spectrum of human interaction with machine intelligence, from supporting engineers to understanding everyday experiences with AI,” the website for the initiative says.

    The thrust of PAIR is to make AI more practical for humans, or, as Wired described it, “less disappointing or surprising.”

    One application of this idea would be using AI as an aid for professionals such as musicians, farmers, doctors, and engineers. Google, however, did not go into detail on how it will put the idea into practice.

    The researchers also hope to help form impressions of artificial intelligence that will enable people to have realistic expectations of it.

    “One of the research questions is how do you reset a user’s expectations on the fly when they’re interacting with a virtual assistant,” Viégas said.

    Viégas and Wattenberg, along with the 12 full-time members of the PAIR team at Google, will also be working with experts from Harvard and the Massachusetts Institute of Technology.

    PAIR, according to Google, will “ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI.” Nevertheless, as Fortune points out, there have been questions of whether tech giants like Google and Facebook are keeping AI knowledge to themselves after hiring many highly regarded researchers in different areas of AI such as deep learning.

  • Apple Shares Source Code For Machine Learning Framework at WWDC 2017

    Apple’s recent WWDC (Worldwide Developers Conference) saw the unheralded release of Core ML, which will reportedly make it easier for developers to come up with machine learning tools across the Apple ecosystem.

    The way this works is that developers convert their trained models into the Core ML model format. They then load the models into Apple’s Xcode development environment before the app can be installed on iOS.

    Developers can convert models built with any of the following frameworks: Keras, XGBoost, LibSVM, Caffe, and scikit-learn. To make loading models even easier, Apple also lets developers write their own converters.

    According to Apple, Core ML is “a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType.”

    The company explained that this new machine learning tool would be “the foundation for domain-specific frameworks and functionality.”

    One of the primary advantages of Core ML is that it speeds up artificial intelligence on the Apple Watch, iPhone, iPad, and perhaps the soon-to-be-released Siri speaker. If it works as billed, an AI task on the iPhone, for instance, would run six times faster than on Android.

    The machine learning models supported by Core ML include linear models, neural networks, and tree ensembles. The company also promised that users’ private data won’t be compromised by this new endeavor, meaning developers can’t simply tinker with a phone to steal private information.

    Core ML also supports:

    • Foundation for Natural Language Processing
    • Vision for Image Analysis
    • GameplayKit

    “Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders,” the company added.

    But Apple is reportedly not content with just releasing Core ML. According to rumors, the company is looking to fulfill its promise of helping to build a very fast mobile platform and is also building a much better chip that can handle AI tasks without compromising performance.

    Though Core ML seems promising, Apple is certainly not blazing the trail when it comes to machine learning. In fact, Facebook and Google have already unveiled their own machine learning frameworks to optimize the mobile user’s experience.

    The new machine learning framework joins Apple’s family of Core frameworks, which already includes Core Audio, Core Location, and Core Image.

  • New Gadget Allows iPhone to Print ‘Moving’ Pictures

    New Gadget Allows iPhone to Print ‘Moving’ Pictures

    Mobile printing startup Prynt has unveiled its latest gadget, which can print color photos in just 30 seconds and uses augmented reality to produce “moving” pictures.

    The gadget is certainly cool, but don’t expect high-quality images. The print quality is reportedly pedestrian and doesn’t even match photos from a Polaroid. However, the faster printing speed, easy functionality, and better connectivity with iPhones make it a good buy compared to most Bluetooth mobile printers.

    Here’s how it works: when users choose an image from their iPhone to print, the application uploads a clip from the Live Photo—or from Instagram’s Boomerang app—to the cloud. When they later scan the printed image using the Prynt app, the video is superimposed on top.

    Prynt co-founder Clément Perrot said, “It’s the best of both worlds. You get something that is tangible, unique, but you also have a sense of the context of what happened at that time.”

    “Here’s a way to capture all of that and put it into something that people would look back at. If it stays on their phone, you don’t necessarily look at it again,” he added.

    A mobile printer is not exactly new, considering that Polaroid has its own instant-film technology as well as its own mobile photo printer. HP also has the Sprocket (which sells for about $130).

    Perrot hopes that the new technology will encourage people to print their most precious photos. While the convenience of camera phones allows people to take as many pictures as they want, they rarely go through the photos after uploading them on social media. In most instances, they just delete the photos after sharing them on Facebook or Instagram.

    Perrot said this is the gap Prynt is trying to fill with its mobile printer: the nostalgia of physical photos, where “you can touch something and go back to it.”

    For now, the Prynt app is only available for the iPhone, though a dedicated Android app will launch later this year. The printer uses inkless paper from Zink, which is activated by heat.

    The Prynt pocket printer sells for $138 on Amazon. Users will have to buy the packs of paper that feed into the device separately; one pack of 40 sheets costs $20.

  • 5 Digital Marketing Trends You Should Be Paying Attention to in 2017

    5 Digital Marketing Trends You Should Be Paying Attention to in 2017

    When it comes to digital marketing, content is still king. Content marketing comprised 20.3% of the digital marketing techniques implemented so far in 2017, although big data (crunching numbers to reveal buying patterns, for instance) is quickly gaining a foothold in online commerce.

    The point is, businesses that still do not see the significance of digital marketing to boost their presence and revenue will end up being left behind by the competition.

    According to a report from Statista, digital ad spending in the U.S. is expected to grow from just a shade under $60 billion in 2015 to $118 billion in 2020, nearly double in just five years. On a global scale, spending is expected to reach over $250 billion by 2018.

    Here are just five of the digital marketing trends to watch for this year:

    1. AR & VR Technology

    The potential of augmented reality and virtual reality in business applications has never been more promising. After the gaming industry latched on to the new technology to enhance the experience for gamers, developers released apps that can help boost businesses. For instance, architects can use AR to give clients a virtual tour of what the finished product would look like. In digital marketing, businesses can exploit VR to help customers get a better picture of their vision than any other type of messaging could.

    2. Live Videos

    Facebook Live and Snapchat videos are just some of the platforms digital marketers can exploit. Video content will dominate the scene in the next few years, with Cisco predicting that videos will account for 80% of consumer internet traffic by 2020. Meanwhile, Facebook Live is growing 94% each year in the U.S., with eight billion daily views.

    Facebook was embroiled in a scandal when its video platform was used to broadcast several violent attacks, which prompted founder Mark Zuckerberg to announce the hiring of 3,000 more people to police the platform of any offensive content.

    3. Apps for Data Visualization

    Applications like DataHero, Tableau, Dygraphs, and Visual.ly have been helping digital marketers package big data for easy consumption, not just for businesses but for consumers as well. This is not exactly a new trend. However, this year businesses are projected to put more effort into using these tools to interpret the facts and figures at their disposal.

    4. Viral Videos Won’t Go Away Anytime Soon

    Last year, Samsung was the big winner after three of its video ads went viral; by December 2016 they already had almost 500 million views combined. Viral marketing will continue to be an effective tool for brand recall, and Google’s new updates, particularly the greater weight given to social signals in ranking, will benefit businesses that invest in quality content. The downside is the short lifespan of viral video marketing: the trick is knowing when to increase engagement, boost traffic, and convert that attention into income before interest wanes.

    5. Content With Short Shelf Life

    Businesses might be tempted to dismiss expiring content as an effective means of building a brand. After all, Facebook Stories and Instagram Stories only stay up for about 24 hours before they disappear, a concept copied from Snapchat, which pioneered the feature. But digital marketers are exploiting the “fear of missing out,” which is human nature. Nobody likes to be the odd one out when everybody is talking about the latest video or grabbing the latest product, which is why Kylie lip products sell like hotcakes even if they don’t really offer anything new.

  • Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Apple Publishes First AI Research Paper on Using Adversarial Training to Improve Realism of Synthetic Imagery

    Earlier this month Apple pledged to start publicly releasing its research on artificial intelligence. During the holiday week, the company released its first AI research paper, detailing how its engineers and computer scientists used adversarial training to improve the typically poor quality of synthetic, computer-game-style images, which are frequently used to help machines learn.

    The paper’s authors are Ashish Shrivastava, a researcher in deep learning; Tomas Pfister, another deep learning scientist at Apple; Wenda Wang, an Apple R&D engineer; Russ Webb, a senior research engineer; Oncel Tuzel, a machine learning researcher; and Joshua Susskind, a deep learning scientist who co-founded Emotient in 2012.


    The team describes their work on improving synthetic images to improve overall machine learning:

    With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator.

    We developed a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a ‘self-regularization’ term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study.

    We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
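    For readers who want the shape of the method, the refiner’s training objective described above combines the adversarial “realism” term with the self-regularization term. This is a sketch of that objective based on the paper’s description; the symbols (\(R_\theta\) for the refiner, \(x_i\) for a synthetic image, \(\mathcal{Y}\) for the set of unlabeled real images, \(\lambda\) for the balancing weight) follow standard GAN notation rather than being quoted verbatim:

    ```latex
    % Refiner loss for S+U learning: an adversarial "realism" term
    % (computed patch-wise by the local discriminator) plus a
    % self-regularization term that keeps the refined image R_theta(x_i)
    % close to the synthetic input x_i, preserving its annotations.
    \mathcal{L}_R(\theta) \;=\; \sum_i \ell_{\text{real}}\!\left(\theta;\, x_i, \mathcal{Y}\right)
    \;+\; \lambda \,\bigl\| R_\theta(x_i) - x_i \bigr\|_1
    ```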

    Conclusions and Future Work

    “We have proposed Simulated+Unsupervised learning to refine a simulator’s output with unlabeled real data,” say the Apple AI scientists. “S+U learning adds realism to the simulator and preserves the global structure and the annotations of the synthetic images. We described SimGAN, our method for S+U learning, that uses an adversarial network and demonstrated state-of-the-art results without any labeled real data.”

    They added, “In future, we intend to explore modeling the noise distribution to generate more than one refined image for each synthetic image, and investigate refining videos rather than single images.”

    View the research paper (PDF).

  • Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Microsoft CEO: We Are Not Anywhere Close To Achieving Artificial General Intelligence

    Satya Nadella, CEO of Microsoft, was recently interviewed by Ludwig Siegele of The Economist about the future of AI (artificial intelligence) at the DLD conference in Munich, Germany, where he spoke about the need to democratize the technology so that it becomes part of every company and every product. Here’s an excerpt transcribed from the video interview:

    What is AI?

    The way I have defined AI in simple terms is we are trying to teach machines to learn so that they can do things that humans do, but in turn help humans. It’s augmenting what we have. We’re still in the mainframe era of it.

    There has definitely been an amazing renaissance of AI and machine learning. In the last five years there’s one particular type of AI called deep neural net that has really helped us, especially with perception, our ability to hear or see. That’s all phenomenal, but if you ask are we anywhere close to what people reference, artificial general intelligence… No. The ability to do a lot of interesting things with AI, absolutely.

    The next phase to me is how can we democratize this access? Instead of worshiping the 4, 5 or 6 companies that have a lot of AI, to actually saying that AI is everywhere in all the companies we work with, every interface, every human interaction is AI powered.

    What is the current state of AI?

    If you’re modeling the world, or actually simulating the world, that’s the current state of machine learning and AI. But if you can simulate the brain and the judgements it can make and transfer learning it can exhibit… If you can go from topic to topic, from domain to domain and learn, then you will get to AGI, or artificial general intelligence. You could say we are on our march toward that.

    The fact that we are in those early stages, where we are at least able to recognize things like free text and keep track of things by modeling essentially what it knows about me and my world and my work, is the stage we are at.

    Explain democratization of AI?

    Sure, 100 years from now, 50 years from now, we’ll look back at this era and say there’s been some new moral philosopher who really set the stage as to how we should make those decisions. In lieu of that though one thing that we’re doing is to say that we are creating AI in our products, we are making a set of design decisions and just like with the user interface, let’s establish a set of guidelines for tasteful AI.

    The first one is, let’s build AI that augments human capability. Let us create AI that helps create more trust in technology because of security and privacy considerations. Let us create transparency in this black box. It’s a very hard technical problem, but let’s strive toward saying how do I open up the black box for inspection?

    How do we create algorithm accountability? That’s another very hard problem because I can say I created an algorithm that learns on its own so how can I be held accountable? In reality we are. How do we make sure that no unconscious bias that the designer has is somehow making it in? Those are hard challenges that we are going to go tackle along with AI creation.

    Just like quality, in the past we’ve thought about security, quality and software engineering. I think one of the things we find is that for all of our progress with AI the quality of the software stack, to be able to ensure the things we have historically ensured in software are actually pretty weak. We have to go work on that.

  • Google Using Google Cardboard to Bring a Virtual Gay Pride Parade to Students

    Google Using Google Cardboard to Bring a Virtual Gay Pride Parade to Students

    Google shared a story today on its Education Blog about its efforts to use its virtual-reality-in-a-box solution, Google Cardboard, to demonstrate diversity and gay lifestyles to students across the world.

    “Earlier this year, we launched #prideforeveryone, a global virtual reality Pride parade that anyone, anywhere could join,” said the Editor of Google’s Education Blog. “Since then, we’ve distributed Google Cardboard and the virtual Pride experience to more than 20 groups and nonprofits, worldwide.”

    He added, “This is the story of Alba Reyes, founder of the Sergio Urrego Foundation, who brought the parade to students in Bogota, Colombia.”

    “In 2014, my son Sergio took his own life because he was suspended and discriminated by his school for kissing another boy. Unfortunately, neither I nor his friends were able to prevent the harassment and isolation he felt.

    Since then, I’ve made it my mission to make sure what happened to Sergio doesn’t happen to any other young person in my country. I started the Fundacion Sergio Urrego to travel to schools across Colombia and lead inclusion workshops with local students. Although LGBTQ children may be more likely to feel isolated, many young people don’t feel accepted by their families, friends or teachers. My workshops create activities and safe spaces that help students understand how it feels to be discriminated against – reinforcing the importance of diversity and inclusion.

    An important part of these workshops is helping students put themselves in another person’s shoes. This summer, we used Google Cardboard to give students in my workshops a way to experience Pride parades from across the globe. Most of these students have never seen a LGBTQ Pride parade. But with virtual reality, they can learn more about the global LGBT community, and feel supported by a global community that celebrates diversity.

    After seeing the impact of my workshop and virtual Pride parade on children in Colombia, institutions like the Ministry of Information and Communication Technologies have shown their support to scale my workshops to even more children across the country.

    My fight is not just for my child. It’s for all children who have endured discrimination and bullying from their peers, teachers and community.”

    Read more here…

  • Highly Anticipated Oculus Touch VR Controllers Now For Sale

    Highly Anticipated Oculus Touch VR Controllers Now For Sale

    Facebook has announced that the highly anticipated Oculus Touch VR Controllers are now available for purchase. Touch allows you to use your hand gestures intuitively within the virtual world, making for a much more immersive experience.


    Facebook is also releasing 54 games and experiences today, each designed to make immersion and social interaction more authentic and to give users a reason to order the new Oculus Touch.

    They say it’s different by design:

    Thanks to Touch’s intuitive and ergonomic design, you forget the controllers and feel like your physical and virtual hands are identical.

    Precise tracking and ergonomic handle design work in tandem to bring hand poses and social gestures into VR for a more immersive experience. The diagonal rather than vertical grip lets your thumb and index finger move autonomously, which helps prevent accidental triggers and puts you in control. Thanks to balanced weight distribution, Touch feels natural in your hand so you can relax and enjoy the virtual world.

    Hand presence opens up new opportunities to interact with others while experiencing VR—and this is just the first step. We can’t wait to see the next wave of immersive content made possible with Touch.

    — The Oculus Team

    What people are saying…

    “I got mine yesterday, it’s money well spent, it’s far better than I imagined it could be,” said David Batty, a director at Code College, his own startup. “The First Contact demo is excellent as an intro to it.”

    “You people are doin gods work,” says Daron Thomas. While Nanci Schwartz says, “I’m not sure what planet I’m on anymore. OMG.”

  • Zuckerberg: Virtual Reality will be the Most Social Platform Ever!

    Zuckerberg: Virtual Reality will be the Most Social Platform Ever!

    Mark Zuckerberg says that virtual reality will be the most social platform that has ever existed. Zuckerberg hosted an in-person Q&A in Rome, Italy on Friday, where he focused on virtual reality after being asked whether Facebook will change our lives as much as Pokemon Go. “The real reason I came to Rome was to find some rare Pokemon,” Zuckerberg replied. “In all seriousness, I think that virtual reality and augmented reality are going to be the most social platform that has ever existed.”

    In 2014, Facebook paid $2 billion for crowd-funded Oculus in order to hit the ground running in the space. “Oculus’s mission is to enable you to experience the impossible,” proclaimed Zuckerberg when announcing the acquisition. “Their technology opens up the possibility of completely new kinds of experiences.”

    Clearly, Zuckerberg sees VR as a huge boon for social and even more so for advertising, where it is believed to have great potential. According to research by Digi-Capital, augmented reality and virtual reality are predicted to be a $150 billion industry by 2020. That’s why so many companies, including Google, are focusing on VR and AR.

    “This is why advertisers are so interested in VR,” said Aaron Luber, who is in charge of Google and YouTube partnerships. “Emotion sells products much more than utility, and that reality positions virtual reality as a game changer in the advertising industry.”

    Facebook recently bought another small VR startup called Two Big Ears, which helps bring an immersive audio experience into VR and AR. Facebook is calling it the Facebook 360 Spatial Workstation which can be downloaded for free.

    “If you think about the history of computing every 10 or 15 years a big new computing platform comes along,” Zuckerberg added. “We had desktop computers, then browsers and the internet, now we have mobile phones and each one is better than the one before it, but what we have now is not the end of the line. We are going to get to a point in 5-10 years that we are all using augmented or virtual reality.”

    Zuckerberg explained how virtual reality makes you feel like you are really there or “present” as he put it. “If you look at a photo or a video on a screen, TV, computer or phone and you are trying to get your mind set in this perspective as if you were there,” he said. “You see the photo and you are trying to imagine what it’s like to be there. Virtual reality is different, because it’s programmed to work exactly the way that your brain does. When you look at it you feel like you are in that place, like you are present and you are trying to convince yourself that you aren’t actually there because if you look around what you are seeing feels like the real world.”

    He sees a future that is vastly improved socially because VR will make the social experience feel like reality. “You can imagine in the future you are going to feel like you are right there with another person who couldn’t actually be with you,” says Zuckerberg. “I think about my family when I’m not there such as my daughter who is in California right now. I miss her and to really feel like I am there right now would be a really powerful experience that I would want to have.”

    He noted the differences between virtual reality, where you feel completely immersed, and augmented reality, where you add virtual elements onto the world. Zuckerberg predicts that AR will come to mobile phones first, “before we get some kind of smart glasses that overlay stuff on the world.” He says that we are going to see more apps like Pokemon Go, and that Facebook itself tested a form of AR with its Masquerade filter, which was test-launched during the Olympics in Brazil and in Canada, where people could support their country by putting on virtual face paint.

    “I feel there are going to be a bunch of tools like that overlay real things from the world on top of your experience and help you share things that we are going to see soon,” says Zuckerberg.

  • A YouTube Introduction to 360-Degree Video and Virtual Reality

    A YouTube Introduction to 360-Degree Video and Virtual Reality

    “360 is a video format that unlike traditional video, is capturing in full 360, so full spherical,” commented Kurt Wilms, Senior Product Manager at YouTube. “When we talk about 360 at YouTube, we’re talking about filming content and viewing content in this new video format.”


    “VR at YouTube is all about how viewers are experiencing this,” Wilms said. “VR means you’re watching the content in a specialized headset that allows for really immersive viewing and really transports you in a way that isn’t possible if you’re watching on your phone or on your desktop computer.”


    YouTube has developed a special camera to create 360-degree videos:


    “360 video as a format has been out for a while but up until recently, people haven’t had a great way to consume in a way that was immersive,” noted Bryce Reid, a User Experience Designer for YouTube. “What I’m really excited about is now that we have all these new technologies for viewing the content, it can really transport people. So I don’t think of it as using a camera to create a scene, I think of it as like a personal teleporter where you can send people to a new place. And that’s really exciting.”


    “Telling traditional stories but putting a new spin on them with 360 is something that all creators should be thinking about across all content verticals,” says Wilms.


    “When I’m filming in 360, I try to place myself inside that camera,” says Reid. “I want to make sure that the scene that I’m telling captures their attention and focuses them in the direction that I want them to be looking, rather than aiming. So if you really want someone to look that way, do something over there and then that will gather some people’s attention.”


    “Whether it’s up, down, left or right, the viewers really feel like they’re present there, they’re actually in the filmmaker’s shoes,” says Wilms.


    “There aren’t do’s or don’ts yet and I think that we’re looking for creators to define that,” said Reid. “We know that there’s some basic rules about comfort, like you know, don’t spin people around and make them sick but other than that, it’s really wide open.”

    “Getting started with 360 is actually really easy,” Wilms said. “You can go out and buy a camera, consumer grade cameras that aren’t very expensive. They integrate with YouTube. You can start filming immediately and upload your content directly to YouTube and start exploring and filming and seeing what works and what doesn’t for your content. Viewers with a mobile phone, with a desktop computer, with a smart TV will be able to watch your content without any special hardware and really allow viewers to experience things like they’ve never experienced before.”


    “360 is best when you want someone to be really close to something and to feel like they were there,” commented Reid. “To form a memory of it. It’s not really like a thing that I watched, it’s a place I was at.”

    “One recent example is creators starting to use 360 to live stream,” says Wilms. “Doing things like putting a 360 camera on a red carpet at an award show to give viewers a whole new perspective and let them watch things live like they never could before.”


    “I’m really excited to see all of the traditional things applied to VR and applied to spherical video and see what sticks,” said Reid. “We really don’t know the answer yet. There are going to be whole new genres that we don’t even know about that are going to emerge from this, and it’s really exciting to think about what that’s going to be.”

  • How Will Artificial Intelligence Change the Workplace?

    How Will Artificial Intelligence Change the Workplace?

    LinkedIn Influencers provided some advice on the future impact of artificial intelligence in the workplace. “The first thing that artificial intelligence will take over is the microphone on your laptop or computer,” said Reid Hoffman, executive chairman and co-founder of LinkedIn, in a new 30-second Influencer video. “What will happen is, it will start listening to your meetings, when you are talking and other things. It will record you in order to help take notes, suggest action items, and suggest other people to communicate with.”

    “I wrote about this in a recent article in the MIT Sloan Management Review,” said Hoffman. In the article, Using Artificial Intelligence to Humanize Management and Set Information Free, he says that “we are on the cusp of a major breakthrough in how organizations collect, analyze, and act on knowledge,” and he thinks this will bring important changes to business decisions.

    “Artificial Intelligence is about to transform management from an art into a combination of art and science,” he says. “Not because we’ll be taking commands from science fiction’s robot overlords, but because specialized AI will allow us to apply data science to our human interactions at work in a way that earlier theorists like Peter Drucker could only imagine.”

    “Artificial intelligence, machine learning, chatbots, guided conversations are coming at us faster than ever,” notes Steve Anderson, president of The Anderson Network and an expert on insurance technology, productivity, and innovation. “I think the first function that will be taken over in an organization is augmenting existing employees and helping them enhance their ability to create a great customer experience: getting those routine questions answered automatically to free up their time to deal with customers’ more detailed questions.”

    “I pray anyhow that it will be in every aspect of my schedule and every aspect of meetings and all the notes taken in meetings and all the followup that can be executed from them, cutting meetings massively and making much more efficiency,” says Christopher Schroeder, CEO-in-waiting, Advisor and Venture Investor and Author. “I can only hope that the robots are as kind to me as the executive assistants that I’ve been blessed to have over the last decade, but we’ll see.”

    “The first thing that artificial intelligence is going to take over in my office is scheduling meetings,” says Nicholas Thompson, editor of NewYorker.com, the website of The New Yorker magazine. “It’s going to take over lots of stuff in the long run, but the first stuff that it’s going to get is stuff where the language inputs are relatively simple and relatively contained. That’s true of meetings, where you say I’m going to meet with you Wednesday and Friday. That’s a complex sentence, but it’s something we’re close to figuring out. The companies trying to do this, like x.ai, which I’ve used, aren’t totally there yet, but they will get there. I’ll be grateful when they do because scheduling meetings stinks!”

    “The first thing that my office AI will take over is customer service,” said Leila Janah, CEO of Sama Group, Co-Founder and CEO of Laxmi, and an award-winning social entrepreneur. “It’s already taken over things like security, and probably some texting with customer service via bots. I think bots are impacting so many different businesses. I already have a bot that replies to most of my messages. We already have an auto-reply on Facebook to my company page, I’m sure it’s going to take over more.”

    “First thing in my office AI will take over is probably the social media feed,” says Ian Bremmer, President of the Eurasia Group. “Let’s face it, we spend too much time on it and they will be better at it. They’re going to make us look shinier and even more attractive than we already are. They’ll optimize for it and we will be able to do other things with our lives. How awesome! That’s great… before they take over.”

    “Let’s face it, we are already seeing AI systems improve text entry; think of Google’s Inbox app, which can predict a likely response,” commented Azeem Azhar, who is Vice President, Venture & Foresight for the Schibsted Media Group. “I think we will be seeing more of that coming into Gmail and word processing software. These machines are going to help us compose our messages, either by automating replies or by helping us be better writers.”

    “Email. That’s the first thing that artificial intelligence is going to take over in the office,” replied Tomasz Tunguz, Venture Capitalist at Redpoint. “I already dictate most of the email I send everyday, and because we can speak about 3 times faster than we can type, it’s far more efficient.”

    “One of the ways artificial intelligence is going to take over my office, is that it’s going to replace in some situations coaches, if you can believe that,” says J.T. O’Donnell, CEO of CAREEREALISM & CareerHMO. “In career coaching there are a lot of typical situations that people will encounter where they need to have an interaction with a coach and get the information and the feedback they need in that moment. But believe me or not, there are so many of those that are similar that we could use artificial intelligence to engage people through those conversations. I expect to see us use that a lot in the future.”

    “The first thing I would like it to take over is the sales cycle,” said Sramana Mitra, Founder at One Million by One Million (1M/1M). “A lot of people come to our website to find out about us and then contact us whether it’s by email or phone or whatever. I think the most productive thing would be to really automate and personalize in a meaningful way, using AI and the application of AI to meaningfully impact the sales cycle.”

    “I have to believe that the first thing in my office that AI will take over is making phone calls,” says Jon Steinberg, Founder and CEO of Cheddar. “Making phone calls is actually the only thing that has gotten worse over the past ten years, whether it’s the signal, the process for dialing, everything. I have to think that soon when we want to get in touch with people you will be able to just express a desire or it will look at the documents or processes you are working on and the voice call or video call will be automatically initiated.”