Apple is preparing to make one of the biggest changes to Siri since the virtual assistant’s debut, ditching the “Hey Siri” voice activation.
According to Bloomberg’s Mark Gurman, via 9to5Mac, Apple is planning to change Siri’s activation phrase to simply “Siri.” As 9to5Mac points out, the move will bring Apple’s virtual assistant more in line with its competitors, such as Amazon’s Alexa.
Gurman says the change has been underway for several months, but involves “a technical challenge that requires a significant amount of AI training and underlying engineering work.”
The feature is expected to reach customers in 2023 or 2024.
Apple has said it inadvertently recorded some customers’ Siri interactions, even when the setting was disabled.
Apple gives users the choice to share their Siri interactions in an effort to improve the virtual assistant. If the option is enabled, Apple can store and analyze those recordings.
Evidently, a bug introduced with iOS 15 re-enabled the feature for some users who had previously turned it off. As soon as Apple realized the issue, it took steps to rectify it.
“With iOS 15.2, we turned off the Improve Siri & Dictation setting for many Siri users while we fixed a bug introduced with iOS 15,” Apple spokesperson Catherine Franklin told The Verge. “This bug inadvertently enabled the setting for a small portion of devices. Since identifying the bug, we stopped reviewing and are deleting audio received from all affected devices.”
Apple has not disclosed exactly how many users were affected, saying only that the bug impacted “a small portion of devices.”
As ZDNet highlights, it appears the bug fix resets the permission warning as well, with iOS 15.4 asking users for permission to use their recordings.
Siri has spilled the beans on Apple’s next event, indicating it will be held on Tuesday, April 20.
Apple watchers have been expecting the company to hold an event to unveil updated iPads and, possibly, the much-anticipated AirTags. Several potential dates have come and gone with no word from Cupertino. Adding to the guessing game are reports that Apple is moving some iPad and MacBook Pro production to the second half of the year due to chip shortages.
It appears Siri was the first to reveal the date, hours before any official announcement from Apple, with the virtual assistant declaring April 20 as the date of Apple’s next event.
Ask Siri: “When is Apple’s next event?”
Siri will respond: “The special event is on Tuesday, April 20, at Apple Park in Cupertino, CA. You can get all the details on Apple.com.”
For several hours, however, no new event was listed on the events page linked in the Siri response. Just moments before this article was published, Apple’s site was updated to confirm the event.
Apple may be on the verge of a significant improvement to Siri, giving the virtual assistant the ability to whisper or shout depending on circumstances.
Amazon clearly demonstrates the benefits of an adaptable virtual assistant in a commercial where a father is trying to impress his daughter with his knowledge of history. The father relies on Alexa’s ability to whisper information to him, which he then passes on to his daughter.
Despite being the first major virtual assistant on the market, Siri still lacks this ability, although it appears Apple is preparing to address that. According to a patent application, first noticed by AppleInsider, Siri will soon have the ability to change its volume based on background noise, room layout and the volume of the person speaking to it.
The decision component may select one or more speech synthesis parameters corresponding to the speech output mode. The decision component may also, or alternatively, select a playback volume. The one or more speech-synthesis parameters, when incorporated in a speech-synthesis model, can cause a speech mode of the synthesized speech to match the speech mode of the utterance.
In other cases, the one or more speech-synthesis parameters, when incorporated in a speech-synthesis model, can cause a speech mode of the synthesized speech to differ from the speech mode of the utterance. In some cases, the decision component may select a speech synthesis model from a plurality of speech synthesis models corresponding to the speech output mode.
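Stripped of the legalese, the patent describes a decision component that picks an output style and volume from cues such as ambient noise and how loudly the user is speaking. Here is a minimal sketch of that idea in Python; the signal names and thresholds are purely illustrative assumptions, not Apple’s implementation:

```python
# Illustrative only: a toy "decision component" in the spirit of the patent.
# The signal names and thresholds are hypothetical, not Apple's implementation.

def choose_speech_output(ambient_noise_db: float, user_speech_db: float) -> dict:
    """Pick a speech output mode and playback volume from environmental cues."""
    if user_speech_db < 30:        # the user is whispering: whisper back
        mode, volume = "whisper", 0.2
    elif ambient_noise_db > 70:    # loud surroundings: raise the voice
        mode, volume = "loud", 0.9
    else:                          # otherwise, a normal conversational reply
        mode, volume = "normal", 0.5
    # In the patent's terms, these become speech-synthesis parameters fed to
    # the synthesis model, so the reply can match (or deliberately differ
    # from) the speech mode of the user's utterance.
    return {"mode": mode, "playback_volume": volume}

print(choose_speech_output(ambient_noise_db=35, user_speech_db=25))
# {'mode': 'whisper', 'playback_volume': 0.2}
```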
This would be a welcome improvement to Siri, and would hopefully help it close the gap with its newer rivals.
A new report sheds light on the AI industry, with Apple the top company for acquiring AI startups.
According to GlobalData, Apple, Google, Facebook and Microsoft bought 60 AI companies between 2016 and 2020, with Apple accounting for 25 of those purchases. Improving Siri is likely a driving motivator behind the acquisitions.
“The US is the leader in AI, and the dominance of US tech giants in the list of top acquirers also indicate that these companies have some defined AI objectives,” said Nicklas Nilsson, Senior Analyst on the Thematic Research Team at GlobalData. “For instance, Apple has been ramping up its acquisition of AI companies, with several deals aimed at improving Siri or creating new features on its iPhones. Machine learning start-up Inductiv was acquired to improve Siri’s data, Irish voice tech start-up Voysis was bought to improve Siri’s understanding of natural language, and PullString should make Siri easier for iOS developers to use.
“Apple has gone on a shopping spree in efforts to catch up with Google (Google Assistant) and Amazon (Alexa). Siri was first on the market, but it consistently ranks below the two in terms of ‘smartness’, which is partly why Apple is far behind in smart speaker sales. Apple also want to make sure to keep its strong position within wearables. It is the dominant player in smartwatches. The acquisition of Xnor.ai last year was made to improve its on-edge processing capabilities, which has become important as it eliminates the need for data to be sent to the cloud, thereby improving data privacy.”
Apple has rolled Siri out across its lineup of devices, making it more important than ever for the virtual assistant to be as good, or better, than its rivals.
The rumored Apple Car is likely another reason why the company is investing so heavily in AI. With self-driving cars viewed as the next evolution of the automobile, Apple needs to ensure its AI technology is up to whatever plans it has.
Apple Maps has been updated to display COVID-19 vaccination locations, making it that much easier to set up an appointment.
As the US rolls out COVID-19 vaccines, one of the biggest challenges is finding a location and setting up an appointment. Some have relied on their local pharmacy, or used websites such as those provided by local governments.
Apple is making it a bit easier, integrating vaccination locations in Apple Maps.
Apple today updated Apple Maps with COVID-19 vaccination locations from VaccineFinder, a free, online service developed by Boston Children’s Hospital that provides the latest vaccine availability for those eligible at providers and pharmacies throughout the US. Users can find nearby COVID-19 vaccination locations from the Search bar in Apple Maps by selecting COVID-19 Vaccines in the Find Nearby menu or by asking Siri, “Where can I get a COVID vaccination?”
The place card will include operating hours, phone number, address and a link to the provider’s website. Although the data is being provided via VaccineFinder, providers and businesses can also submit their information directly to Apple.
Along with the data provided by VaccineFinder, healthcare providers, labs, or other businesses can submit information on COVID-19 testing or vaccination locations on the Apple Business Register page. Once validated, Apple may display information about the testing or vaccination locations to people using Apple services such as Apple Maps.
Apple’s move is good news for Mac and iOS users, removing one more hurdle to people accessing the vaccine.
Reports indicate Apple may be working on its own search engine, a move that would have far-reaching repercussions.
Apple and Google have a long-running deal, whereby Google pays Apple billions to be the default search engine on iOS devices. Apple has alternately used Bing and Google to power Siri’s search features over the years. With iOS and iPadOS 14, however, Siri will bypass Google’s search results page, instead taking the user directly to the site. This would seem to indicate Apple is beginning to distance itself from third-party search engines.
In addition, there has been a noticeable uptick in Apple job postings calling for search engineers. Coywolf founder Jon Henshaw has noticed Apple’s web crawler, Applebot, has been crawling his website daily. Apple has also updated its information on Applebot.
There are a number of things Apple could gain by unveiling its own search engine. First and foremost, it would give Apple the ability to deliver on its promise to protect user privacy. No matter how much Apple may work to do that on users’ devices or its own services, when users turn to Google or Bing, they give up much of their privacy to those companies and their partners. Apple could build a search engine that features the same industry-leading privacy as its other products.
In addition, as Henshaw points out, Apple could customize the experience in a typical Apple way, providing something unique that offers an entirely new take on search. Whatever Apple is working on, it may well upend the search industry as we know it.
In a first for the company, Apple held a 100% digital version of WWDC, bringing welcome improvements across all of its platforms.
CEO Tim Cook began the conference, taking the opportunity to address the major issues the world is facing, especially racial inequality and the coronavirus pandemic. Cook pointed out that it was more important than ever for Apple to continue to innovate, supporting its users and being a positive force for change.
He then turned the program over to Craig Federighi to highlight some of the changes to iOS.
iOS 14 Home Screen
Federighi immediately launched into some of the biggest changes coming in iOS 14, including an improved Home Screen.
The iOS Home Screen has remained largely unchanged over the years, adding only incremental improvements, such as Folders. With iOS 14, Apple’s mobile OS finally offers substantial improvements to the Home Screen, giving users the option to hide entire pages of apps. In their place, iOS has an App Library view that automatically groups apps according to category, and makes recommendations based on usage.
iOS 14 also includes the ability to add widgets directly to the Home Screen, with apps rearranging around them.
Picture-in-Picture (PiP)
Apple is bringing one of the most popular features of the iPad to the iPhone in iOS 14, namely PiP. The feature will work similarly to its larger counterpart, letting users watch videos while working on other apps.
Siri
Siri receives some welcome upgrades as well. First and foremost is on-device dictation. In previous versions, every interaction with Siri required internet connectivity. While searches will still require an internet connection, dictation will be done entirely on the iPhone.
Siri also benefits from a streamlined interface, displaying as a small bubble at the bottom of the screen, rather than taking up the entire view.
Translate
Apple is unveiling its own translation software, but with a typical Apple flair. The software automatically keeps track of who is saying what, translating and displaying the results accordingly.
Messages
Messages includes some major upgrades, including inline replying and mentions. Users can set their group message notification settings to only notify them when they are directly mentioned in the thread.
Another welcome benefit is the ability to pin conversations to the top of the list, making it easier to refer back to popular or important threads.
Maps
Maps has been upgraded to include information for cyclists, such as where they will have to deal with stairs, and gives them the option of avoiding stairs altogether.
Maps will also include information to help electric vehicle owners find charging stations and plan their trips accordingly.
CarPlay
CarPlay is getting a major new feature that will allow an iPhone to lock/unlock and start a compatible car.
CarPlay will use NFC to create a digital car key that is securely stored on the iPhone. Additional keys can be created and shared with others, so someone else can access the vehicle if needed. The feature will also be brought to iOS 13.
Individuals concerned about whether they have coronavirus will be able to get a virtual checkup just by asking Siri.
Apple has updated Siri to walk individuals through the U.S. Public Health Service questions to determine their risk and whether they need to take further action or wait it out.
Asking, “Hey Siri! Do I have coronavirus?” prompts the digital assistant to begin the questionnaire, with Siri’s follow-up questions and recommendations depending on the answers given.
This is just the latest example of how technology can be used to assist overworked medical staff. Virtual assistants and artificial intelligence can act as a sort of early triage, helping individuals know when they should seek medical attention and, at the same time, sparing medical professionals from those who may be worried about nothing.
Apple may (finally!) be on the verge of allowing other apps to be set as the defaults in iOS.
Since iOS was introduced, users have not been able to change the default apps, such as Mail, Safari and Music. While other apps could be installed and used, they could never be set as the defaults. Any clicked web links would still open Safari and any clicked email links would still use Mail. While Safari and Mail are both extremely capable programs, there are other apps that offer different advantages and, in some cases, are better. Microsoft Outlook, for example, routinely wins praise for its features, not to mention integration with the rest of Office.
According to Bloomberg, people familiar with the matter say Apple may finally be ready to give up some control and let users set their preferred apps as the defaults. The move is being considered amid ongoing scrutiny and accusations that Apple’s apps have an unfair advantage over its rivals, an argument that certainly has weight to it. While some tech savvy individuals may opt to use other apps for browsing and email, and jump through the necessary hoops to make it work, the average user will simply use the easiest option.
Apple is also said to be considering a similar change for the music service on the HomePod, as well as letting users change the default music service when using Siri on an iOS device. While the sources say nothing has been finalized, it’s possible these changes could happen later this year.
Apple has been pushing the iPad as a computer replacement for some time. This is an important and necessary step that should have been taken years ago to assist that goal. Hopefully, with new opportunities available, it will further encourage developers to create desktop-class apps in categories they otherwise might have ignored.
According to VentureBeat, Apple has acquired artificial intelligence (AI) startup Xnor.ai for roughly $200 million.
Xnor specializes in “the efficient deployment of AI in edge devices like smartphones, cameras, and drones.” Apple has been making increasing inroads in the AI field, first with Siri and then with the Neural Engine hardware on the company’s line of chips used in iPhones and iPads. With Apple’s emphasis on privacy and security, however, the company is trying to do as much as possible on-device, rather than relying on the cloud.
Xnor’s technology is a perfect fit for Apple, as it would help the company expand the AI capabilities of its devices. According to GeekWire, “the three-year-old startup’s secret sauce has to do with AI on the edge — machine learning and image recognition tools that can be executed on low-power devices rather than relying on the cloud.”
GeekWire also believes the acquisition is likely to help improve the iPhone’s image processing features.
“The arrangement suggests that Xnor’s AI-enabled image recognition tools could well become standard features in future iPhones and webcams.”
Apple made headlines when Siri first debuted, but quickly lost its lead in AI as Google, Microsoft, Amazon and countless startups focused more heavily on the burgeoning field. Apple has a long history of leapfrogging its competition, however, so it will be interesting to see what role Xnor will play in the company’s AI endeavors.
Leading up to CES 2020, Samsung teased Neon, an artificial human. Details were sparse, and Samsung said little other than to confirm Neon was an all new endeavor and had nothing to do with their existing AI engine, Bixby.
At CES 2020, Samsung finally showed what Neon is: a virtual, “artificial human” avatar, according to TechRepublic. Unlike an AI assistant, Neon is not designed to be a source of information, or have the answers to every question put to it. It’s designed to be a personal companion, one that learns and evolves just as a human being would.
Pranav Mistry, CEO of Neon and head of Samsung’s STAR Labs, set out to see if technology and AI could become more human-like. The end result is an AI that “can have conversations and behave like humans, and they will form memories and develop new skills. However, each one is unique, with its own personality that can develop over time.”
In many ways, the technology sounds similar to S1m0ne, the movie starring Al Pacino about a movie producer who creates a virtual actress. Beyond the science fiction novelty, however, Neon has the potential to be used in a wide range of practical applications, such as interpersonal training or companionship.
In the meantime, Neon is still several years away from public availability. Until then, we’ll just have to keep talking to Siri and Alexa.
The ability to talk with an artificial intelligence (AI), be it a computer or robot, has been a staple of science fiction for decades. Despite modern advances, anyone who has used Siri, Alexa, Cortana or the Google Assistant knows we’re still a ways off from what’s portrayed in science fiction.
Chinese tech giant Baidu has just taken a big step in that direction, however. According to the MIT Technology Review, Baidu has leapfrogged Microsoft and Google in helping AI better understand language.
General Language Understanding Evaluation (GLUE) is the industry benchmark used to gauge an AI’s language comprehension skills. For perspective, most humans manage a score of 87 out of 100. Baidu’s model, however, scored a 90—a first for AI models.
The team attributed their breakthrough with ERNIE (Enhanced Representation through kNowledge IntEgration) to the steps they needed to take in order to help it understand Chinese. The most advanced AI language models use a technique called “masking,” where the AI randomly hides words in order to predict the meaning of the sentence. Because of the differences between Chinese and English, Baidu “researchers trained ERNIE on a new version of masking that hides strings of characters rather than single ones. They also trained it to distinguish between meaningful and random strings so it could mask the right character combinations accordingly.”
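To make the distinction concrete, here is a toy illustration contrasting single-token masking with the span-style masking the ERNIE team describes. This is not Baidu’s code; the sentence and the entity spans are made-up assumptions:

```python
import random

# Illustrative only, not Baidu's code: contrast masking a single token
# (the classic approach) with masking a whole meaningful span of tokens
# (the change the ERNIE team describes).

tokens = ["harry", "potter", "was", "written", "by", "j", "k", "rowling"]
spans = [(0, 2), (5, 8)]  # "harry potter" and "j k rowling" (made-up example)

def mask_single_token(tokens):
    """Hide one randomly chosen token; the model predicts it from context."""
    out = tokens[:]
    out[random.randrange(len(out))] = "[MASK]"
    return out

def mask_span(tokens, spans):
    """Hide an entire meaningful unit at once, so the model must recover
    the whole name or phrase rather than a lone character or word."""
    out = tokens[:]
    start, end = random.choice(spans)
    for i in range(start, end):
        out[i] = "[MASK]"
    return out

random.seed(0)
print(mask_single_token(tokens))  # one word hidden somewhere in the sentence
print(mask_span(tokens, spans))   # a whole name hidden, e.g. both "harry" and "potter"
```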
Not only did this masking approach allow ERNIE to better understand the Chinese language, but those lessons also improved its English processing, enabling it to achieve the highest GLUE score yet. Hopefully, this breakthrough will help pave the way for the type of AI interactions that have, so far, existed only in the realm of science fiction.
There’s no doubt that virtual assistants and AI-based voice services are one of the next big things in the technology industry. Long the stuff of science fiction, voice-based computing represents the next leap in computer interface and usability paradigms. As a result, virtually all the major players are pushing ahead with development.
It should come as no surprise that Amazon, one of the biggest players in the voice-enabled market, has announced the Voice Interoperability Initiative. The initiative is an effort to standardize how voice-enabled products work and “is built around a shared belief that voice services should work seamlessly alongside one another on a single device, and that voice-enabled products should be designed to support multiple simultaneous wake words.”
Already, more than 30 companies have signed on to the initiative, including the likes of Microsoft, Salesforce, Logitech, Qualcomm, Libre, Intel, Spotify and others.
“Multiple simultaneous wake words provide the best option for customers,” said Jeff Bezos, Amazon founder and CEO. “Utterance by utterance, customers can choose which voice service will best support a particular interaction. It’s exciting to see these companies come together in pursuit of that vision.”
While the initiative’s goals look good on paper, there are some challenges. Notably, the idea of having multiple voice services working on a single device may not fly with some of Amazon’s competitors. Indeed, Apple, Google and Samsung are noticeably absent from the initiative.
In the case of Apple, given their strong pro-privacy stance, it’s unlikely they will want to put Siri on hardware made by a competitor. Similarly, Google may be hesitant to give up the control that comes with their Google Home hardware.
Whatever the outcome, one thing is clear: Voice-enabled services are shaping up to be another technological battleground between some of the biggest names in the industry.
The Co-Founder & former CEO of Siri, Dag Kittlaus, says that he “would have liked to see Siri evolve to doing more things, greater capabilities to become a bigger part of your life.” Siri was acquired by Apple in 2010.
In 2012 Kittlaus co-founded another AI company, Viv, an artificial intelligence platform that enables developers to create an intelligent, conversational interface to anything. Viv was acquired by Samsung in 2016.
On the positive side, it’s gotten a lot faster, the speech recognition got a lot better. I would have liked to see Siri evolve to doing more things, greater capabilities to become a bigger part of your life merely because it’s doing so many more things for you.
That was really the idea for the next company that we started: how do we make it go from a basic utility in your life, not a novelty, to something much bigger, a paradigm in itself that you are really relying on in your everyday world.
Why Hasn’t Apple Gotten There, and Is It Apple’s Fault?
To some extent, but I just think they had a different focus from where we started it originally. I would have liked to see Apple open up to a third party ecosystem much earlier. That’s something that we are doing now. We think that is the big missing piece.
The Apple app store is actually a perfect metaphor for this. The iPhone actually launched in 2007 with just a few Apple apps on it, weather and some very basic things. When the app store opened and unleashed the creativity of developers around the world, that changed the world.
About Viv from the Original Announcement:
For consumers, Viv is going to be the intelligent interface to everything. You’re going to be talking to all different kinds of things, and it’s going to be doing all sorts of things for you. For developers, Viv is going to be the next great marketplace. You’ve got app stores today, but the thing that comes after app stores is this new type of marketplace: a marketplace that works for all the different kinds of devices the Internet of Things will bring and the use cases they’ll generate, and a marketplace that will become the next big area.
Tinder Co-Founder Sean Rad, in an interview on stage at the Web Summit in Lisbon, Portugal, said that he thinks that as the technology of AI advances that Siri might become a matchmaker soon:
I think the future looks nothing like what you see right now. A lot of people talk about AI and its ability to create new insights and new data, but I actually like to think about AI and its ability to create better user experiences. I’ll give you a simple picture of where I think not just Tinder but a lot of different applications are headed. I think Siri might become a matchmaker soon.
Tinder has made it exceptionally simple and easy to connect with people. This is partially because it introduces a new way to double opt-in, and partially because behind the scenes there’s a lot that we’re doing with AI to ensure we show you the best possible matches. But you could see how it could get even easier.
One day, because the system is so smart in knowing the users and knowing what you want, Siri might say: hey Sean, there’s someone a mile away who you find attractive, and we’re pretty sure she finds you attractive, and you both happen to like Coldplay, and they’re playing in town next week. Do you want to get a coffee and, if you like each other, go? Siri might then create that transaction or might actually make that introduction like a traditional matchmaker.
You sort of see that as technology gets better, technology starts to disappear in our lives and starts to become a little more fluid with our daily behaviors and that creates exciting new possibilities.
What About AI-Powered Bots Making Matches?
I hope not. I think that’s a scary existence. You don’t want to take the humanity out of technology.
Salesforce Co-founder & CEO Marc Benioff announced at the Dreamforce conference a new partnership with Marriott that is going to bring the power of artificial intelligence into the hotel room via a phone app. This new Marriott app will connect directly with a Salesforce Einstein powered customer database that knows the customer’s preferences and will be accessible via Siri and other voice recognition platforms.
Marriott Using Salesforce Technology to Reinvent the Hotel Experience
Every company has to rethink who they are in the fourth industrial revolution. This includes Marriott; that’s why Arne Sorenson, the CEO of Marriott, is here, and you are going to see in the keynote a whole new vision for the future of Marriott: all the technology, everything they need to do to connect with their customer in a whole new way via Salesforce.
That includes the ability to check into a Marriott with your digital key right on your phone, and then the ability to talk to Siri and order your favorite sandwich. In the Salesforce customer database, we will know you want a turkey sandwich and we are going to bring it to your room. Then if you tell Siri to lower the lights and put the room temperature to 67 degrees the lights go down and the temperature adjusts.
Salesforce Einstein Radically Empowering the Customer
How does it know? Where does it get all the data? How does it have the customer background? It comes from Salesforce; we’re the backend. This is what the conference is about: helping these incredible people, our trailblazers, know how to build these systems. We had to build something amazing first. This isn’t voice recognition that we did. We had to build something called Salesforce Einstein, the number one artificial intelligence platform in enterprise applications. We are now doing billions of Einstein transactions every day, giving our customers the ability to radically use artificial intelligence to make them more productive and smarter about their customers.
We are connecting Einstein to Siri, or Einstein to any other voice platform; then we take that voice recognition and we are able to move it to the database. Don’t forget, when I say I want my favorite sandwich, Siri knows what I’m saying with my voice, but then we have to take that and retrieve it from or insert it into the customer database. That’s the magic. That’s Einstein Voice: the glue, the middleware that links all the voice systems you are using in your home and on your phone with the number one CRM in the world, Salesforce.
Marc Benioff Says the Economy Is Ripping
The economy is ripping. There is incredible demand from customers to rebuild their systems, they’re benefitting from the tax breaks, they are benefitting from a huge economy that is growing at rates they’ve never seen before. Even here in San Francisco, our unemployment now is below 3 percent. That’s amazing!
In an historic breakthrough, Microsoft’s AI team has developed technology that recognizes speech as well as humans do. Their research team published a paper (PDF) showing that their speech recognition system makes errors at the same rate as professional transcriptionists: 5.9%.
The IBM Watson research team published a word error rate (WER) of 6.9% earlier this year, noting that their previous WER of 8%, announced in May 2015, was itself 36% better than previously reported external results.
Clearly, artificial intelligence technology is on a pace that could make machine word recognition superior to human word recognition in a matter of months. Of course, WER is only one way to measure performance, and the technology must continue to improve to achieve perfect comprehension and deliver human-level responses.
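For context on what those percentages measure: WER is simply the word-level edit distance (substitutions, insertions and deletions) between a system’s transcript and a reference transcript, divided by the number of words in the reference. A minimal sketch in Python:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / words in reference,
    computed with standard Levenshtein dynamic programming over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution against a six-word reference: WER = 1/6, about 16.7%.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```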
Microsoft, IBM, Apple, Google, Amazon and a host of other companies are on a mission to use AI to integrate speech recognition technology into virtually every device. In order to truly make the IoT meaningful to people, we will need to be able to communicate with those devices in our own language. By 2020, there will be over 30 billion things connected to the internet, according to Cloudera.
“We’ve reached human parity,” said Xuedong Huang, who leads Microsoft’s Advanced Technology Group and is considered the company’s chief speech scientist. “This is an historic achievement.”
Microsoft says that the milestone will have broad implications for consumer and business products including consumer devices like Xbox and personal digital assistants such as Cortana.
“This will make Cortana more powerful, making a truly intelligent assistant possible,” notes Harry Shum, the executive vice president who heads the Microsoft Artificial Intelligence and Research group. “Even five years ago, I wouldn’t have thought we could have achieved this. I just wouldn’t have thought it would be possible.”
“The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.
The holy grail according to Shum is “moving away from a world where people must understand computers to a world in which computers must understand us.”
At the rate the technology is advancing, that goal now seems within reach.
Apple’s Worldwide Developers Conference (WWDC) hasn’t been formally announced yet, but iOS users have discovered that Siri is giving the date when asked.
As reported earlier by 9to5Mac, if you ask Siri, “When is WWDC?” it will tell you that the event will be held June 13 through June 17 in San Francisco.
It doesn’t say where the event will be held, but it’s typically at Moscone West and is expected to take place there once again.
Apple’s website still has info up for last year’s conference, which was held June 8 through 12. The event featured over 100 technical sessions, over 1,000 Apple engineers, hands-on labs, and the Apple Design Awards. Sessions were streamed live.
Voicemail is pretty terrible, and this is a universally accepted opinion. Except for your mom. Your mom loves leaving voicemails.
But even mom probably hates listening to voicemails. In fact, phone owners hate most sorts of talking and listening. The world is powered by texts, and that’s probably not changing for the foreseeable future.
It’s nice to hear, then, that Apple is looking to put Siri to work transcribing your voicemails.
Business Insider says that Apple is testing a new voicemail service called iCloud Voicemail, and it could mean that you never have to listen to a stupid voicemail ever again.
Here is how it works: When someone using iCloud Voicemail is unable to take a call, Siri will answer instead of letting the call go to a standard digital audio recorder.
iCloud Voicemail can relay information to certain people about where you are and why you can’t pick up the phone. But the coolest feature of the service is that Siri will transcribe any incoming voicemails, just as it does with anything else you say to it.
Apple sends voice data to company servers, where Siri converts the words spoken into text. iCloud Voicemail will presumably function in the same way, sending the raw voicemails to Apple, and Siri will then transcribe them and make them available on your iPhone.
Meaning Apple will send you a text version of any audio messages recorded by iCloud Voicemail.
And hopefully, that text is accurate. Apple is far from the first company to offer voicemail transcription (Google already offers it), but the technology remains far from perfect.
Apple employees are currently testing this internally, and it’s not ready for primetime. The report indicates it could launch next year, around the time iOS 10 would be arriving.
Facebook, currently on a mission to beef up its Messenger platform in a multitude of ways, may just give everyone a personal shopping assistant inside the app.
The Information reports that sources familiar with the plans are talking about Facebook’s newest addition to the Messenger lineup: a personal assistant codenamed “Moneypenny” (an obvious nod to the James Bond character).
It doesn’t appear that Facebook is building its own Siri or Cortana; instead, it is building the framework for a service to rival startups like Magic or Operator. The Information says “Moneypenny” will “allow users to ask real people for help researching and ordering products and services, among other tasks.”
So, Facebook might want to let users message real people, who can provide help in shopping or purchasing other services.
Since unbundling Messenger from the main Facebook app last April, Facebook has been hard at work turning Messenger into something way more than a texting app.
The company turned it into a developer platform, allowing appmakers to build directly for it. More germane to the news of “Moneypenny,” Facebook is trying to turn Messenger into a platform to connect businesses with their customers, for the purposes of customer service, order tracking, and more.
A few months ago, Facebook gave Messenger its own web version. More recently, Facebook has allowed games to be played inside the platform. P2P payments just went live for all users in the US.