WebProNews

Category: ITProNews

ITProNews

  • Intel is Now Warning Customers NOT to Use its Spectre Patch Due to Flaws

    Intel is Now Warning Customers NOT to Use its Spectre Patch Due to Flaws

    If your PC runs on an Intel processor, it’s best not to install the Intel patches designed to address the Spectre vulnerability just yet. Apparently, Intel’s bug-fixing patch itself has a bug, one that causes unwanted reboots.

    Intel issued an announcement Monday confirming that it has identified the cause of the reboot issue experienced by Intel users. Last week, numerous users complained of spontaneous reboots of their computers after installing Intel’s Spectre/Meltdown patch.

    Aside from confirming that the company has zeroed in on the problem, the post by executive vice president Navin Shenoy also issued recommendations for owners of affected Intel chips.

    “We recommend that OEMs, cloud service providers, system manufacturers, software vendors, and end users stop deployment of current versions on specific platforms as they may introduce higher than expected reboots and other unpredictable system behavior.” 

    Affected Intel chips include the Broadwell, Haswell, Coffee Lake, Kaby Lake, Skylake, and Ivy Bridge series. However, some models seem to be affected by the reboot bug more than others.

    According to Shenoy, an early fix is already available but is still being tested by industry partners. The company will “make a final release available once that testing has been completed.”

    So what does this announcement mean for PC users with affected Intel chips? If you own a Haswell- or Broadwell-based CPU and have not yet applied any Spectre/Meltdown updates, simply continue to hold off for the time being. Intel will announce when the new bug-free patch is available.

    The entire computing world was shocked by the discovery of Spectre and Meltdown, processor vulnerabilities that could be exploited by hackers. Apparently, everyone had been sitting on the flaws for more than two decades before experts discovered them.

    [Featured image via Intel]

  • Amazon Introduces Unified Auto Scaling for AWS

    Amazon Introduces Unified Auto Scaling for AWS

    One big reason why businesses find cloud computing attractive is its scalability. But Amazon Web Services (AWS) just brought scalability to a whole new level by launching a new feature called AWS Auto Scaling, which allows clients to adjust the scaling features of multiple AWS services via a single interface.

    With cloud computing’s scalability, companies no longer need to spend unnecessarily on computing hardware they rarely use. Cloud computing providers offer clients very flexible options: they can scale up hardware capacity in times of heavy demand and scale down their allocation of computing resources when demand is low.

    Realizing this enormous advantage, companies usually use multiple AWS services to handle the different aspects of their operations and application needs. Until now, adjusting the scaling of these different AWS services had to be done independently. With the new AWS Auto Scaling feature, however, keeping track of and tweaking the scaling of a company’s different AWS services should be a breeze, as AWS Chief Evangelist Jeff Barr explained in a blog post.

    “You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.”

    Of course, the end game for this auto-scaling feature is to give companies greater control over their desired mix of availability versus cost, which ultimately determines the amount of computing resources they draw from AWS.

    The new feature offers four scaling strategies: Optimize for availability, Balance availability and cost, Optimize cost, and Custom.
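    For a concrete picture of what “pointing AWS Auto Scaling at your application” looks like, here is a sketch of a single scaling plan covering two services at once, shaped like the request the CreateScalingPlan API accepts. The field names follow the boto3 `autoscaling-plans` client; the application tag, resource IDs, and capacity numbers are made up for illustration.

    ```python
    # Sketch of an AWS Auto Scaling plan: one plan, several resources.
    plan = {
        "ScalingPlanName": "webapp-plan",
        # Point the plan at an application, here identified by a tag.
        "ApplicationSource": {
            "TagFilters": [{"Key": "app", "Values": ["webapp"]}]
        },
        # One instruction per scalable resource of interest.
        "ScalingInstructions": [
            {
                "ServiceNamespace": "ecs",
                "ResourceId": "service/prod/webapp",
                "ScalableDimension": "ecs:service:DesiredCount",
                "MinCapacity": 2,
                "MaxCapacity": 20,
                "TargetTrackingConfigurations": [{
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "ECSServiceAverageCPUUtilization"
                    },
                    "TargetValue": 50.0,  # keep average CPU near 50%
                }],
            },
            {
                "ServiceNamespace": "dynamodb",
                "ResourceId": "table/webapp-sessions",
                "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
                "MinCapacity": 5,
                "MaxCapacity": 500,
                "TargetTrackingConfigurations": [{
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "DynamoDBReadCapacityUtilization"
                    },
                    "TargetValue": 70.0,
                }],
            },
        ],
    }
    ```

    With boto3 installed and credentials configured, such a plan would be submitted with `boto3.client("autoscaling-plans").create_scaling_plan(**plan)`; the point is that both resources are governed by one plan rather than by per-service alarms.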

    The new feature is now live in the Asia Pacific, US East (Ohio), US East (Northern Virginia), EU and US West (Oregon) regions.

    [Featured Image via Amazon Web Services]

  • Google Tries to Fill Vacant IT Jobs Through Certification Program

    Google Tries to Fill Vacant IT Jobs Through Certification Program

    Google has just rolled out an education course that aims to address the dearth of IT professionals in the United States.

    Google recently announced that it will be teaming up with Coursera to offer an IT support training program. The company is hoping that the new program will help fill in the IT shortage in the country.

    The course is dubbed the Google IT Support Professional Certificate. It’s designed to assist students with no previous IT education or training to get the relevant experience necessary to secure an entry-level job in just eight to 12 months.

    The idea for the Coursera program was the result of the best practices that came to light during Google’s in-house IT residency program, which began in 2010. According to Natalie Van Kleef Conley, the product lead for Google, the company had to contend with having openings for IT roles and not enough skilled people to fill the vacancies. It’s a situation that a lot of companies are familiar with. As a matter of fact, research shows that there are 150,000 vacant IT support jobs in the US today.

    Google previously worked with YearUp, a nonprofit that designs workforce training programs for adults in low-income families. The organization developed a program that helped prepare young professionals for entry-level IT support jobs.

    The experience also proved to Google that IT is a teachable field, and that people can be taught the fundamentals of IT in as little as eight to 12 months.

    Google’s new Coursera-backed program consists of over 64 hours of video lessons, along with hands-on labs and various interactive assessments. The course will cover topics like customer service and troubleshooting, automation, networking, operating systems, security, and system administration. According to Van Kleef Conley, the topics cover “all the fundamentals of IT support.”

    The course will also feature Google staff whose own experience in IT support served as a starting point in their careers.

    Google’s new IT support course will cost $49 a month. However, scholarships will be offered to those who come to the program via the non-profit organizations that Google partners with, like Goodwill, Per Scholas, Student Veterans of America, Upwardly Global, and YearUp.

    In order to complete the certificate, participants have to finish six courses. One of them is already open, while the other five are available for pre-enrollment; these courses will commence on January 23.

    This isn’t the first time that Google has worked with Coursera. The two companies have previously collaborated on Cloud Platform training modules for businesses. They also share the same vision when it comes to developing programs that aim to help people secure good jobs.

  • Microsoft Azure Cuts into AWS’s Market Share in 4th Quarter

    Microsoft Azure Cuts into AWS’s Market Share in 4th Quarter

    Microsoft’s Azure is finally gaining a foothold in the cloud computing niche with its market share jumping from 16 to 20 percent in the 4th quarter of 2017. The jump amounts to around $3.7 billion of its total revenue for the year. In contrast, Amazon Web Services (AWS) saw its market share drop from 68 to 62 percent in the same time frame.

    The latest industry figures were provided by analysts from KeyBanc, a Cleveland-based boutique investment bank specializing in mergers and acquisitions.

    With the cloud computing market still in its infancy, tech companies are busy positioning themselves to gain the upper hand in the lucrative niche. At the moment AWS is the dominant player, but this year will likely see many companies significantly expand their respective cloud computing divisions so as not to get left out of the emerging market.

    Microsoft, for one, has already made sizable investments in Azure, gearing up for the inevitable competition. The company recently added a number of data centers in the UK and other parts of the globe. Factoring in Microsoft’s efforts, KeyBanc expects the Azure platform to grow rapidly, projecting a massive 88 percent increase by the end of 2018.
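    As a back-of-the-envelope check, assuming KeyBanc’s 88 percent projection applies to the roughly $3.7 billion Azure figure cited above, the implied 2018 number works out as follows (illustrative arithmetic only, not a KeyBanc-published figure):

    ```python
    azure_2017_revenue = 3.7                # billions USD, per KeyBanc
    projected_growth = 0.88                 # KeyBanc's 2018 growth projection
    azure_2018_projection = azure_2017_revenue * (1 + projected_growth)
    print(round(azure_2018_projection, 2))  # prints 6.96
    ```

    That is, on these assumptions Azure would approach $7 billion in revenue by the end of 2018.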

    It’s fair to say Microsoft is on a buying spree in an effort to boost its market presence and clout. Recently, it acquired Avere Systems, a startup specializing in data storage solutions.

    But as expected, AWS will not take the challenge sitting down. Last November, it announced a new partnership with Cerner, a firm specializing in offering technology solutions for the healthcare industry.

    [Image by Azure/Facebook]

  • Nvidia Says Its New Patch is Immune to Recently Discovered Spectre Bug

    Nvidia Says Its New Patch is Immune to Recently Discovered Spectre Bug

    Last week, Nvidia ran into some problems with the recently revealed “Meltdown” and “Spectre” bugs. When the manufacturer announced that it would be releasing a new patch to address the memory-related vulnerabilities, some outlets misinterpreted the statement as an indirect admission that perhaps even its GPUs might be vulnerable.

    For those who might be wondering what Spectre and Meltdown are, here’s a brief explanation:

    Just days after the explosive New Year celebration, the entire computing world received an even more explosive revelation—everyone’s been sitting on top of a security flaw that has existed for more than two decades.

    On January 3, Meltdown and Spectre, two security vulnerabilities present in virtually every computer, tablet, and smartphone made since 1995, were finally revealed to the world. Before you go ahead and unplug all your computing devices, take a deep breath and relax; everything is being taken care of. Patches are being deployed, and the vulnerabilities should soon become a thing of the past.

    If you have to blame someone, blame our need to have our computers run faster. In 1995, manufacturers developed the idea of executing certain computer processes ahead of time, even before users input their choice. This technique was termed “speculative execution.” For example, if you frequently visit a particular website that requires login credentials, your computer will guess that you want to enter your password and begin loading the necessary files. This boosts computing speed, since your PC will be correct about your intentions most of the time. The problem starts when the user’s actual input differs from what the computer predicted: the speculative work is discarded, but traces of it can linger in the processor’s cache memory.

    This would not have been a problem back when PCs were usually stand-alone. These days, however, everything is connected, and with this interconnectivity researchers were able to prove that someone with access to the system could take a peek at traces of that discarded data. While it may not necessarily contain sensitive information, there is a chance that it contains personal data like credit card numbers.
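    The mechanics can be illustrated with a toy Python simulation. This is not a real exploit (real attacks measure cache timing on actual hardware); it only models the key idea that the bounds check holds architecturally, yet the speculative out-of-bounds read leaves a footprint in a simulated cache that reveals the secret byte.

    ```python
    SECRET = ord("K")            # a byte that sits just past the array's bounds
    array1 = [1, 2, 3, 4]
    memory = array1 + [SECRET]   # index 4 is out of bounds but maps to the secret

    cache = set()                # which "cache lines" have been touched

    def victim(x):
        # A real CPU checks the bounds, but may speculatively execute the
        # body anyway; we model that by always performing the dependent load.
        value = memory[x]        # speculative (possibly out-of-bounds) read
        cache.add(value)         # the dependent access leaves a cache trace
        # Architecturally, the out-of-bounds result is discarded:
        return memory[x] if x < len(array1) else 0

    result = victim(4)           # architecturally returns 0 ...
    leaked = [b for b in range(256) if b in cache]
    # ... but probing the "cache" still recovers SECRET (75, i.e. "K")
    ```

    The architectural result is harmless; it is the microarchitectural side effect, the cache trace, that the Spectre researchers showed could be read out.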

    In response to the security vulnerabilities, Nvidia decided to update the driver software of its GPUs. However, the move was misunderstood, with some taking it as an admission that its GPUs could be vulnerable as well.

    To address the misconceptions about its products, Nvidia CEO Jensen Huang clarified the company’s position during the Q&A portion of a press conference on Wednesday. “Our GPUs are immune, they’re not affected by these security issues,” Huang asserted. “What we did is we released driver updates to patch the CPU security vulnerability.”

    As Huang explains it, the patches are basically similar in nature to the patches provided by other tech companies and not an indication that its hardware is particularly vulnerable. “We are patching the CPU vulnerability the same way that Amazon, the same way that SAP, the same way that Microsoft, etc is patching because we have software as well,” he said.

    Summing it up, Huang declared, “I am absolutely certain that our GPU is not affected.”

    Just how much of a threat do these vulnerabilities pose to regular internet users? According to UPROXX, the general public should not be too worried. For starters, the vulnerabilities are very hard to exploit. In addition, there is no indication that they have resulted in any actual exploitation. Come to think of it, it took experts more than two decades to realize the flaws even existed.

    Everything should be fine as long as you run the fixes being prepared by various tech companies.

    [Featured image via Nvidia]

  • Google Sued By Fired Employee Alleging that White, Male Conservatives are Systematically Discriminated Against

    Google Sued By Fired Employee Alleging that White, Male Conservatives are Systematically Discriminated Against

    James Damore, the former Google engineer who was fired in 2017 after releasing a manifesto that questioned the benefits of diversity programs, has filed a discrimination lawsuit against Google. In the manifesto, Damore also discussed his belief that Google is politically biased, saying, “At Google, we talk so much about unconscious bias as it applies to race and gender, but we rarely discuss our moral biases. Political orientation is actually a result of deep moral preferences and thus biases. Considering that the overwhelming majority of the social sciences, media, and Google lean left, we should critically examine these prejudices.”

    Damore and his attorney Harmeet K. Dhillon conducted a live video press conference on Facebook explaining why they filed suit this morning. He stated that he thinks his lawsuit will “help make Google a truly inclusive place.”


    Here are some highlights of Damore’s manifesto:

    I value diversity and inclusion, am not denying that sexism exists, and don’t endorse using stereotypes. When addressing the gap in representation in the population, we need to look at population level differences in distributions. If we can’t have an honest discussion about this, then we can never truly solve the problem. Psychological safety is built on mutual respect and acceptance, but unfortunately our culture of shaming and misrepresentation is disrespectful and unaccepting of anyone outside its echo chamber. Despite what the public response seems to have been, I’ve gotten many personal messages from fellow Googlers expressing their gratitude for bringing up these very important issues which they agree with but would never have the courage to say or defend because of our shaming culture and the possibility of being fired. This needs to change.

  • Google’s political bias has equated the freedom from offense with psychological safety, but shaming into silence is the antithesis of psychological safety.
  • This silencing has created an ideological echo chamber where some ideas are too sacred to be honestly discussed.
  • The lack of discussion fosters the most extreme and authoritarian elements of this ideology.
  • Extreme: all disparities in representation are due to oppression
  • Authoritarian: we should discriminate to correct for this oppression
  • Differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.

    According to The Verge, Damore and another fired engineer claim that “employees who expressed views deviating from the majority view at Google on political subjects raised in the workplace and relevant to Google’s employment policies and its business, such as ‘diversity’ hiring policies, ‘bias sensitivity,’ or ‘social justice,’ were/are singled out, mistreated, and systematically punished and terminated from Google, in violation of their legal rights.”

  • How Cloud-Based ERP Can Benefit Small Businesses

    How Cloud-Based ERP Can Benefit Small Businesses

    There’s no question that small businesses have greatly benefited from today’s technology. Ten years ago, many companies would not have considered placing their enterprise resource planning (ERP) systems on a public cloud platform. But now it’s a route that more businesses are taking.

    Understanding Cloud ERP

    Cloud-based computing utilizes the Internet to administer shared computing resources, like disk storage, memory, and processing power to run numerous software applications. Meanwhile, cloud ERP is software that is accessed via the cloud. It uses the Internet to connect to servers that are hosted away from a company’s premises. This is in direct contrast to traditional ERP and business productivity software that is generally housed in the company’s headquarters.

    Benefits of Using Cloud ERP for Small Businesses

    Image via AgileTech

    Numerous businesses have turned to ERP solutions to automate their businesses, and small and medium-sized enterprises (SMEs) in particular have discovered that cloud-based ERP provides multiple benefits.

    • It’s more secure: While some companies are admittedly still worried that cloud ERP can render their data vulnerable, more and more companies are placing their trust in cloud security. This is because companies have strict security requirements, which puts cloud ERP providers under pressure to ensure that their technology is always secure.
    • It boosts productivity: Small businesses are often concerned that moving to a new technology will disrupt their work. But moving to a cloud-based platform is just a temporary inconvenience and will actually boost productivity in the long run. Cloud ERP is user-friendly and makes it easier for employees to collaborate in real-time. It also eliminates the need to get in touch with other employees just to ask for a single file since everything is accessible. And the less time is wasted on simple processes, the more time is afforded for innovation and improvement.
    • Data flow is centralized: Small businesses often develop problems once they start growing and find that their various departments’ data is housed in different places. For instance, inventory data is kept in one software program while financial information is saved in another. Cloud ERP ensures that all relevant data lives in one place, giving authorized users access to important files and data quickly and easily.
    • It’s affordable: When it comes to capital outlay, cloud-based programs and data storage cost less compared to implementation and maintenance of an IT system housed on company premises. This holds true even when taking into account the monthly service fee that a company would pay a cloud provider. By doing away with the yearly maintenance fees and just charging per month or per user, cloud ERP becomes more affordable than systems that demand expensive licenses and need constant software and hardware upgrades.
    • Businesses become more flexible: Cloud ERP provides the accessibility, mobility, and flexibility that conventional ERPs lack. Since cloud-based ERP is managed offsite and on a system that’s always available, management will find that ordering and delegating tasks become simpler and easier.

    How to Ensure Cloud ERP Works

    Using a new system in your business is admittedly tricky. To ensure that integrating a cloud-based solution will have a positive outcome, companies should start with a dry run before going live. This process includes testing the new cloud ERP system with select employees first and ensuring that they are well trained in using the new system. This will lessen any problems that might appear once there’s a change in the infrastructure of the IT system. It also has the dual purpose of revealing which staff members will be free to manage other tasks. Company resources can then be reassigned or internal teams moved to maximize their potential.

    Small businesses that are considering using cloud ERP will need a reliable cloud provider and the know-how to optimize the technology’s best features. Once the transition to the cloud is successful, a company can enjoy higher productivity, enhanced business processes, and more success.

    [Featured image via Pixabay]

  • Using Cloud Analytics for a Winning March Madness Bracket

    Using Cloud Analytics for a Winning March Madness Bracket

    Welcome to the first of two days of low workforce production and nail-biting moments that lead into three glorious weeks of March Madness. The first round of the 2017 NCAA Division I Men’s Basketball Tournament tips off today, and it should be another doozy. With everyone scrambling to get their brackets in before the 12:15 p.m. EST tip-off, hundreds of thousands of bracketologists have spent the last few days studying stats, algorithms, trends, and matchups, or just picking favorite mascots, to complete their winning pool entries.

    But for Nic Smith, Global VP of Product Marketing for Cloud Analytics at SAP, it’s all about the data and analytics. In a clever and fun jump into March Madness, Nic and his SAP Data Genius team are using the company’s BusinessObjects Cloud to break down which teams will advance in this year’s college basketball tournament.

    Using analytic capabilities that include data preparation, modeling, data exploration, planning and what-if analysis, visual storytelling, and automated smart data discovery to uncover hidden patterns in which teams will perform best, Nic outlines the steps his team used to create its bracket.
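    SAP hasn’t published the team’s exact model, but the flavor of a data-driven bracket is easy to sketch. One classic starting point is Bill James’s log5 formula, which turns two teams’ overall win rates into a head-to-head win probability; here it is in Python, with made-up win rates for illustration:

    ```python
    def log5(pa, pb):
        """Probability that team A (win rate pa) beats team B (win rate pb)."""
        return (pa - pa * pb) / (pa + pb - 2 * pa * pb)

    # Evenly matched teams are a coin flip:
    even = log5(0.5, 0.5)        # 0.5

    # What-if analysis: the same 0.85 favorite against two possible opponents.
    vs_strong = log5(0.85, 0.70)
    vs_weak = log5(0.85, 0.55)   # the easier draw yields a higher probability
    ```

    Chaining such probabilities round by round, and re-running them as upsets reshuffle the matchups, is exactly the kind of what-if scenario analysis described above.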

    Nic’s team plans to share results as the rounds of the tournament unfold, providing updated analysis, what-if scenarios, and insights based on team performance. He also challenges anyone who thinks they can beat the SAP Data Genius bracket to share (and trash-talk) their results using #vizthemadness with @SAPAnalytics.

  • Internet of Things to Drive the Fourth Industrial Revolution: Industrie 4.0 — Companies Endorse New Interoperable IIoT Standard

    Internet of Things to Drive the Fourth Industrial Revolution: Industrie 4.0 — Companies Endorse New Interoperable IIoT Standard

    The Industrial Internet of Things (IIoT) will be the primary driver of the fourth industrial revolution, commonly referred to as Industrie 4.0, and Cisco and other companies are at the forefront.

    “Industrie 4.0 is not digitization or digitalization of mechanical industry, because this is already there,” said Prof. Dr.-Ing. Peter Gutzmer, Deputy CEO and CTO of Schaeffler AG. “Industrie 4.0 is getting the data real-time information structure in this supply and manufacturing chain.”

    “If we use IoT data in a different way we can be more flexible so we can adapt faster and make decisions if something unforeseen happens, even in the cloud and even with cognitive systems,” says Gutzmer.

    From the 2013 Siemens video below:

    “In intelligent factories, machines and products will communicate with each other, cooperatively driving production. Raw materials and machines are interconnected within an internet of things. The objective: highly flexible, individualized and resource-friendly mass production. That is the vision for the fourth industrial revolution.”

    “The excitement surrounding the fourth industrial revolution or Industrie 4.0 is largely due to the limitless possibilities that come with connecting everything, everywhere, with everyone,” said Martin Dube, Global Manufacturing Leader in the Digital Transformation Group at Cisco, in a blog post today. “The opportunities to improve processes, reduce downtime and increase efficiency through the Industrial Internet of Things (IIoT) is easy to see in manufacturing, an industry heavily reliant on automation and control, core examples of operational technology.”

    Connectivity between machines is vital for the success of Industrie 4.0, but it is far from simple. “The manufacturing environment is full of connectivity and communication protocols that are not interconnected and often not interoperable,” notes Dube. “That’s why convergence and interoperability are critical if this revolution is to live up to (huge) expectations.”

    Dube explains that convergence is the concept of connecting machines so that communication is possible, while interoperability is the use of a standard technology enabling that communication.

    Cisco Announces Interoperable IIoT Standard

    Cisco announced today that a number of key tech companies have agreed on an interoperable IIoT standard. The group, which includes ABB, Bosch Rexroth, B&R, Cisco, General Electric, National Instruments, Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech, is aiming for an open, unified, standards-based and interoperable IIoT solution for communication between industrial controllers and to the cloud, according to Cisco:

    ABB, Bosch Rexroth, B&R, CISCO, General Electric, KUKA, National Instruments (NI), Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech are jointly promoting OPC UA over Time Sensitive Networking (TSN) as the unified communication solution between industrial controllers and to the cloud.

    Based on open standards, this solution enables industry to use devices from different vendors that are fully interoperable. The participating companies intend to support OPC UA TSN in their future generations of products.


  • Cisco: It’s Increasingly Easy to Imagine a Time When Every Device is Connected to the Internet of Things

    Cisco: It’s Increasingly Easy to Imagine a Time When Every Device is Connected to the Internet of Things

    Yves Padrines, Paris based VP, Global Service Provider EMEAR at Cisco, says that all devices will soon be connected to the internet of things (IoT).

    From the Cisco SP360: Service Provider blog:

    It’s increasingly easy to imagine a time when every device – from the street lamps on your road to the fridge in your kitchen – is connected to the internet of things. So it’s probable people will use IoT in ways we haven’t even begun to imagine.

    The automotive industry is one area where IoT is already becoming a reality. Recent research by technology consultants Chetan Sharma found that in the first quarter of 2016, there were more cars added to networks than phones (32%, compared to 31%).

    The owner of a connected car might want to subscribe to a connected vehicle care service, including options like virtual in-car assistance, sensor-based maintenance alerts, and on-board scheduling of appointments. They might also want to assess their driving safety, limit the speed a teenage driver can reach, or even monitor the health of an older family member at the wheel. And lots of organisations would be interested in the data provided by connected cars – insurance companies, emergency services, and parking providers, to name just a few.

    In the US, AT&T already has over 8 million cars on its network. AT&T used Cisco’s virtualisation technology to create a network specifically for connecting cars. They required a fundamentally new mobile architecture that would enable machine-to-machine connections. Using Cisco technology, they were able to create a network that combined virtual and physical resources.

    Of course, it isn’t just cars that can benefit from being connected. Philips has announced it sees itself as “the lighting company for the Internet of Things”, and has begun partnerships with Cisco and Vodafone. And in a further indication of IoT’s huge potential, service providers like Orange France – who last year created a low power network for machine-to-machine applications – are investing in the technology.

    Read the rest on SP360: Service Provider.

    Below is a related interview with Guillaume Gottardi, a Consulting System Engineer at Cisco Systems based in Paris, France.

    Also worth watching is Cisco’s video on their IoT advancements with General Motors cars:

  • Trump & Tech: The Clashes May Not Prove So Dramatic

    Trump & Tech: The Clashes May Not Prove So Dramatic

    Silicon Valley thinks the election of Trump is a disaster, but some tech leaders are starting to realize that the real impact may not be so dramatic.

    From Christopher Mims writing at the Wall Street Journal:

    Mr. Srinivasan (Balaji Srinivasan, Partner at venture capital firm Andreessen Horowitz) views the collision between tech culture and Mr. Trump’s populist movement as inevitable, and potentially so divisive that tech’s global elites should effectively secede from their respective countries, an idea he calls “the ultimate exit.”

    “It’s crazy to me that people in Silicon Valley have no idea how half the country lives and is voting,” said Ben Ling, an investment partner at venture firm Khosla Ventures. Many “coastal elites” attribute the results “to just sexism or racism, without even trying to figure out why [people] wanted to vote for Trump.”

    Ultimately, the clashes may not prove so dramatic. Technology may fall short of visionaries’ lofty promises. And Mr. Trump may pursue policies that are more symbolic than detrimental to the tech industry, says Anshu Sharma, a venture capitalist at Storm Ventures and founder of artificial-intelligence startup Learning Motors.

    “We’ll eventually find out whether he decides he does want to bring back an Apple factory from China,” says Mr. Sharma. “I think he’s going to pick on one or two companies and make an example, to show his base that he’s fixing America.”

    Read the rest of the story at the Wall Street Journal.

  • The State of Artificial Intelligence at Facebook

    The State of Artificial Intelligence at Facebook

    When you think of Facebook, you think of data, but not so much technology. Get ready for an in-depth preview of how Facebook is using, and plans to further use, artificial intelligence and other key technologies it sees as critical to its future.

    “Facebook’s long-term roadmap is focused on building foundational technologies in three areas: connectivity, artificial intelligence and virtual reality,” wrote Mike Schroepfer, Facebook’s Chief Technology Officer. “We believe that major research and engineering breakthroughs in each of these areas will help us make more progress toward opening the world to everyone over the next decade.”

    Facebook Making AI Research Useful Now

    Tying all of these crucial technology projects together is AI. “We’re conducting industry-leading research to help drive advancements in AI disciplines like computer vision, language understanding and machine learning,” he says. “We then use this research to build infrastructure that anyone at Facebook can use to build new products and services. We’re also applying AI to help solve longer-term challenges as we push forward in the fields of connectivity and VR. And to accelerate the impact of AI, we’re tackling the furthest frontiers of research, such as teaching computers to learn like humans do — by observing the world.”

    Facebook has learned to quickly turn their AI research into productive uses by their development and production teams. “As the field of AI advances quickly, we’re turning the latest research breakthroughs into tools, platforms and infrastructure that make it possible for anyone at Facebook to use AI in the things they build,” said Schroepfer.

    The backbone of their AI product development is FBLearner Flow, which allows their coders to easily reuse algorithms across products. Just as important, given the billions of people using the Facebook platform, it lets their developers run thousands of experiments to test scaling.

    Another advancement is AutoML, which according to Facebook automatically applies the results of each test to other machine learning models to make them better. This significantly improves the time to market on AI breakthroughs.

    A brand new product that they developed based on their AI research is Lumos, a self-serve platform that gives teams that haven’t been exposed to the technology a way to “harness the power of computer vision” for their products and services.

    How is AI Impacting Facebook’s Users?

    Facebook is quickly, and sometimes quietly, adding AI capabilities right into the products that Facebook users love. For instance, AI is used to automatically translate posts from friends who write in other languages. It is also behind the once-controversial ranking of everyone’s News Feed. Remember when this used to be in chronological order?

    Facebook says that over the next 3-5 years AI will be used in features all across the platform.

    AI Being Used to Improve Aspects of Video

    Facebook sees video communication as its future and is working on ways to use AI to improve this experience.

    “We started working on style transfer, a technology that can learn the artistic style of a painting and then apply that style to every frame of a video,” said Schroepfer. “This is a technically difficult trick to pull off, normally requiring that the video content be sent to data centers for the pixels to be analyzed and processed by AI running on big-compute servers. But the time required for data transfer and processing made for a slower experience. Not ideal for letting people share fun content in the moment.”

    “Just three months ago we set out to do something nobody else had done before: ship AI-based style transfer running live, in real time, on mobile devices,” he added. “This was a major engineering challenge, as we needed to design software that could run high-powered computing operations on a device with unique resource constraints in areas like power, memory and compute capability.”

    All of this work has resulted in a new deep learning platform called Caffe2Go, which can capture, analyze and process pixels in real time on a mobile device, according to Facebook.

    “We found that by condensing the size of the AI model used to process images and videos by 100x, we’re able to run deep neural networks with high efficiency on both iOS and Android. This is all happening in the palm of your hand, so you can apply styles to videos as you’re taking them.”
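    Facebook hasn’t published the details of that 100x reduction, but one standard way to shrink a model is weight quantization: storing 32-bit floating-point weights as small integers plus a shared scale. A minimal sketch in pure Python (the weight values are made up for illustration):

```python
def quantize(weights, bits=8):
    """Map float weights onto (2**bits - 1) evenly spaced integer levels."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi != lo else 1.0
    # Each weight becomes a small integer index into the level grid.
    q = [round((w - lo) / scale) for w in weights]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate float weights from the integer levels."""
    return [lo + qi * scale for qi in q]

weights = [0.12, -0.48, 0.33, 0.07, -0.91]   # hypothetical model weights
q, lo, scale = quantize(weights)
restored = dequantize(q, lo, scale)
# Each restored weight is within half a quantization step of the original,
# while 8-bit storage is a quarter the size of 32-bit floats.
```

    Quantization alone gives roughly 4x; reaching 100x typically means combining several techniques (smaller architectures, pruning, and so on), which Facebook does not enumerate here.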

  • IBM: Watson to Predict the Future and Truly Change the World

    IBM: Watson to Predict the Future and Truly Change the World

    Within 10 years, IBM believes that its artificial intelligence-driven Watson will literally predict the future. “It won’t be long before Watson is predicting the future,” said Dr. John E. Kelly III, IBM senior vice president of Cognitive Solutions and IBM Research, at the World of Watson conference in Las Vegas yesterday. “Doctors, for example, may use Watson to help predict when a diabetic patient is about to have a blood sugar spike.”

    He added, “When that happens, then we truly, truly, have changed the world.”

    “The technology is not even moving fast,” he says. “It’s accelerating. It’s moving faster and faster every day. Honestly, it blows my mind and I’m an optimist.”

    Can Watson ever become creative? Kelly noted the difficulty of that question, then pointed out that Watson is already being creative with Chef Watson and Cognitive music, where it’s actually writing songs.

    We are potentially reaching a world-changing moment with AI and its capability to think, create and even predict.

  • Google Cuts Fiber and Division Head Resigns

    Google Cuts Fiber and Division Head Resigns

    Google is cutting its losses with its high speed internet service to restrategize and reduce expenses. It will continue serving its current fiber cities and will complete the buildout of its fiber service where construction has already begun, but it will close offices and end all future fiber plans.

    With this announcement, the head of Google’s fiber division, Craig Barratt, announced his resignation. “As for me personally, it’s been quite a journey over the past few years, taking a broad-based set of projects and initiatives and growing a focused business that is on a strong trajectory. And I’ve decided this is the right juncture to step aside from my CEO role. Larry has asked me to continue as an advisor, so I’ll still be around.”

    Google currently has fiber in nine locations: Kansas City, MO; Kansas City, KS; Atlanta; Austin; Charlotte; Nashville; Provo; Salt Lake City; and the Triangle area of North Carolina. It lists four cities as upcoming: Huntsville, AL; Irvine, CA; Louisville; and San Antonio. It’s likely that none of these cities will see their Google fiber dreams fulfilled.

    Another big disappointment is in store for cities that Google had listed as potential fiber cities, including Tampa, Jacksonville, Chicago, Dallas, Oklahoma City, Phoenix, San Diego, Los Angeles, San Jose and probably Louisville. Google plans to close offices in all of these cities and lay off personnel.

    “In terms of our existing footprint, in the cities where we’ve launched or are under construction, our work will continue,” said Barratt. “For most of our potential Fiber cities — those where we’ve been in exploratory discussions — we’re going to pause our operations and offices while we refine our approaches. We’re ever grateful to these cities for their ongoing partnership and patience, and we’re confident we’ll have an opportunity to resume our partnership discussions once we’ve advanced our technologies and solutions.”

    Barratt added that they will be reducing their employee base in cities that are in an “exploratory stage.”

  • Microsoft Democratizing AI with Cognitive Toolkit Release

    Microsoft Democratizing AI with Cognitive Toolkit Release

    Microsoft is changing, folks: it’s no longer the hated technology company that hoards power and technology. Today, Microsoft released to developers an updated version of the Microsoft Cognitive Toolkit, which uses deep learning so that computers, fed huge data sets, can learn on their own.

    For instance, developers could feed CPUs and NVIDIA® GPUs millions of images of vegetables, and the system would learn over time which are cucumbers, no matter how distorted and different they appear. It will match what is similar and over time become very good at it. This matching and learning technology is applicable to a vast number of software solutions.

    The toolkit is free, easy to use and open source, and Microsoft says it trains deep learning algorithms to learn like the human brain. In fact, it’s helping to change the world while simultaneously changing Microsoft.

    “This is an example of democratizing AI using Microsoft Cognitive Toolkit,” says Xuedong Huang, who is Microsoft’s Chief Speech Scientist.

    Microsoft originally created the Toolkit for internal use. “We’ve taken it from a research tool to something that works in a production setting,” noted Frank Seide, a principal researcher at Microsoft Artificial Intelligence and Research and a key architect of Microsoft Cognitive Toolkit.

    The current version of the toolkit can be downloaded via GitHub with an open source license. It includes new functionality letting developers use Python or C++ programming languages and allows researchers to do a type of artificial intelligence work called reinforcement learning.

    The latest version is also much faster when ingesting big datasets from multiple computers, which is essential for implementing deep learning across multiple GPUs. This allows developers to create smart, AI-enabled consumer products and enables manufacturers to connect more smart devices, empowering the IoT revolution.

    Deep learning, according to Microsoft, is an AI technique in which large quantities of data, known as training sets, literally teach computer systems to recognize patterns in huge quantities of images, sounds or other data.
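    The Cognitive Toolkit itself is far more sophisticated, but the core idea of learning from a training set can be illustrated with a toy nearest-neighbor classifier, where the labeled examples, not hand-written rules, define the pattern (all features and numbers below are invented):

```python
# A toy "training set": each example is (features, label).
# Real deep learning uses millions of examples and learned features,
# but the principle is the same: the data defines the pattern.
training = [
    ((15.0, 1.0), "cucumber"),  # (length_cm, curvature) -- hypothetical features
    ((18.0, 0.8), "cucumber"),
    ((7.0, 0.1), "zucchini"),
    ((6.0, 0.2), "zucchini"),
]

def classify(features):
    """Label an unseen example by its nearest training example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: dist(ex[0], features))[1]

print(classify((16.0, 0.9)))  # nearest labeled examples are cucumbers
```

    A deep network replaces the hand-picked features and distance lookup with layers of learned weights, but both approaches get better as the training set grows.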


    Just last week, Microsoft announced an historic voice recognition breakthrough, reaching virtual parity with human speech. Microsoft’s AI team credited the Microsoft Cognitive Toolkit’s speed improvements with allowing them to reach this level of performance so soon.

  • Historic Breakthrough: Microsoft Reaches Virtual Parity With Human Speech

    Historic Breakthrough: Microsoft Reaches Virtual Parity With Human Speech

    In an historic breakthrough, Microsoft’s AI team has developed technology that recognizes speech as well as humans do. Their research team published a paper (PDF) showing that their speech recognition system makes errors at the same rate as professional transcriptionists: 5.9%.

    The IBM Watson research team published a word error rate (WER) of 6.9% earlier this year. They noted that their previous WER, announced in May 2015, was 8%, which was itself 36% better than previously reported external results.

    Clearly, artificial intelligence technology is on a pace that will make machine word recognition superior to human word recognition in just a matter of months. Of course WER is only one way to measure and the technology must continue to improve for perfect comprehension and to prompt human level responses.
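    WER itself is straightforward to compute: it is the word-level edit distance between a reference transcript and the system’s hypothesis, divided by the number of reference words. A minimal sketch (the example sentences are hypothetical):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on the hat"))  # 1 error / 6 words
```

    Production scoring pipelines normalize punctuation, casing and filler words before computing this distance, which is why published WER figures are only comparable when the test set and normalization rules match.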

    Microsoft, IBM, Apple, Google, Amazon and a host of other companies are on a mission to use AI to integrate speech recognition technology into virtually every device. In order to truly make the IoT meaningful to people, we will need to be able to communicate with them in our language. By 2020, there will be over 30 billion things connected to the internet, according to Cloudera.

    “We’ve reached human parity,” said Xuedong Huang, who leads Microsoft’s Advanced Technology Group and is considered their chief speech scientist. “This is an historic achievement.”

    Microsoft says that the milestone will have broad implications for consumer and business products including consumer devices like Xbox and personal digital assistants such as Cortana.

    “This will make Cortana more powerful, making a truly intelligent assistant possible,” notes Harry Shum, the executive vice president who heads the Microsoft Artificial Intelligence and Research group. “Even five years ago, I wouldn’t have thought we could have achieved this. I just wouldn’t have thought it would be possible.”

    “The next frontier is to move from recognition to understanding,” said Geoffrey Zweig, who manages the Speech & Dialog research group.

    The holy grail according to Shum is “moving away from a world where people must understand computers to a world in which computers must understand us.”

    At the rate the technology is advancing, that goal now seems within reach.

  • By 2020 There Will Be 30 Billion Connected Things!

    By 2020 There Will Be 30 Billion Connected Things!

    By 2020 there will be 30 billion connected things, according to a new infographic by Cloudera. The company believes that the Internet of Things (IoT) could be an extremely disruptive force in society, basically changing everything, according to Vijay Raja, Sr. Solutions Marketing Manager for Cloudera IoT.

    “With billions of things — including everything from cars, homes, airplanes, apparels, parking meters, wearables, factories, oil rigs, and heavy machinery — connected to the internet, the Internet of Things (IoT) has the potential to be the most disruptive technological advances in recent ages.”

    IoT is Really About the Data

    Despite the name, IoT is less about things and more about the data those things will generate: real-time data streams that will need to be processed, analyzed and turned into action items for both other things and humans.

    “However, IoT is going to be much more than the things itself — IoT is really going to be all about the data!” says Raja. “IoT will generate far greater volume and variety of data than most information leaders are currently familiar with — requiring re-architecting of their existing data management infrastructures. Organizations around the globe will need to adopt a more scalable, agile, and open data management architecture in order to effectively ingest, store, manage, process and, more importantly, drive insights from all of their IoT data.”
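    The pipeline Raja describes, ingesting device data and driving insights from it, can be sketched at its very simplest as a stream of readings filtered into action items (the device names, readings and threshold below are invented):

```python
# Minimal sketch: turn a raw IoT data stream into action items.
THRESHOLD_C = 80.0  # hypothetical overheating threshold

def actions(stream):
    """Yield an alert for every (device_id, temperature) reading over threshold."""
    for device_id, temp in stream:
        if temp > THRESHOLD_C:
            yield f"ALERT: {device_id} at {temp:.1f}C, dispatch maintenance"

readings = [("rig-7", 71.2), ("rig-9", 84.6), ("rig-7", 79.9), ("rig-3", 90.1)]
for item in actions(readings):
    print(item)
```

    At real IoT scale the same shape holds, but the list becomes a distributed message stream and the threshold check becomes a learned model, which is exactly the re-architecting Raja says most data infrastructures still need.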


  • CEO Tim Cook Pushes Cashless Society and Predicts AI in all Apple Products

    CEO Tim Cook Pushes Cashless Society and Predicts AI in all Apple Products

    Apple CEO Tim Cook recently visited Japan for the opening of their state-of-the-art research and development facility. He described it as a “center for deep engineering that will be very different from the R&D base Apple plans to build in China.”

    “I cannot tell you the specifics,” Apple CEO Tim Cook told Nikkei Asian Review while riding on a bullet train in Japan. “The specific work is very different.”

    According to the report, Cook is seeking to integrate artificial intelligence into all of their product offerings, presumably including the iPhone. “AI is horizontal in nature, running across all products and is used in ways that most people don’t even think about.”

    “We want the AI to increase your battery life and recommend music to Apple Music subscribers,” he said. “AI could (even) help you remember where you parked your car.”

    “Smartphones are 9 years old,” he said. “We are not even teenagers yet. We just got going. I think there is an incredible future ahead.”

    Apple Seeks to Promote Cashless Society

    Apple’s CEO sees Apple Pay as the beginning of the end of cash worldwide. “We would like to be a catalyst for taking cash out of the system,” said Cook. “We don’t think the consumer particularly likes cash.”

    Cook is likely talking about what many predict: that we are on a course toward a cashless society over the next 20 to 40 years, and Apple would like to be a leader in making that happen.

    Cook said Apple intends to increase their push into mobile payments in Japan and all of Asia, with Apple Pay finally coming to Japan later this month. The iPhone 7 will become their first mobile phone to work with Japan’s FeliCa contactless payment system. “Japan is important to us,” said Cook. “FeliCa was born in Japan and so by extension, FeliCa is important.”

    Of course, it’s not the first time that Tim Cook has predicted the end of cash. “Your kids will not know what money is,” Cook told students at Trinity College in Dublin last year.

  • Microsoft Ends Moore’s Law, Builds a Supercomputer in the Cloud

    Microsoft Ends Moore’s Law, Builds a Supercomputer in the Cloud

    A group of Microsoft engineers has built an artificial intelligence system using deep neural networks, which will be deployed on Catapult by the end of 2016 to power Bing search results. They say this AI supercomputer in the cloud will increase the speed and efficiency of Microsoft’s data centers, and that Bing search engine users will notice the difference. They call it “the slow but eventual end of Moore’s Law.”

    “Utilizing the FPGA chips, Microsoft engineering (Sitaram Lanka and Derek Chiou) teams can write their algorithms directly onto the hardware they are using, instead of using potentially less efficient software as the middle man,” notes Microsoft blogger Allison Linn. “What’s more, an FPGA can be reprogrammed at a moment’s notice to respond to new advances in artificial intelligence or meet another type of unexpected need in a datacenter.”

    The team created this system that uses a reprogrammable computer chip called a field programmable gate array (FPGA) that will significantly improve the speed of Bing and Azure queries. “This was a moonshot project that succeeded,” said Lanka.

    What they did was insert an FPGA directly between the network and the servers, which speeds up computation by bypassing the traditional software approach. “What we’ve done now is we’ve made the FPGA the front door,” said Derek Chiou, one of the Microsoft engineers who created the system. “I think a lot of people don’t know what FPGAs are capable of.”

    Here is how the team described the technology:

    [Figure: The Catapult Gen2 card, showing FPGA and network ports enabling the Configurable Cloud]

    Hyperscale datacenter providers have struggled to balance the growing need for specialized hardware (efficiency) with the economic benefits of homogeneity (manageability). In this paper we propose a new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications. This Configurable Cloud architecture places a layer of reconfigurable logic (FPGAs) between the network switches and the servers, enabling network flows to be programmably transformed at line rate, enabling acceleration of local applications running on the server, and enabling the FPGAs to communicate directly, at datacenter scale, to harvest remote FPGAs unused by their local servers.

    [Figure: Hardware and software compute planes in the Configurable Cloud]

    We deployed this design over a production server bed, and show how it can be used for both service acceleration (Web search ranking) and network acceleration (encryption of data in transit at high speeds). This architecture is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication. By coupling to the network plane, direct FPGA-to-FPGA messages can be achieved at comparable latency to previous work, without the secondary network. Additionally, the scale of direct inter-FPGA messaging is much larger. The average round-trip latencies observed in our measurements among 24, 1000, and 250,000 machines are under 3, 9, and 20 microseconds, respectively. The Configurable Cloud architecture has been deployed at hyperscale in Microsoft’s production datacenters worldwide.

  • Google Seeks to Transform Education with New Change Center

    Last week the Google for Education Transformation Center was announced as a hub and launch pad for school change. Google has long been involved in pushing technology to improve and modernize education, but with the launch of the Center it hopes to spur a community of educational thought leaders to action.

    Persuading schools to adopt innovative technology is not as simple as it may seem: there are often cultural obstacles to overcome, and success requires leadership that embraces positive change, along with strategies to bring all stakeholders on board through transparency and learning programs.

    Technology can’t just be forced on educators; it first needs to be embraced by educators as a solution to a perceived problem. Finally, school leaders need to budget for change and improvement, so that technology is about student progress and not funding reallocation.

    “Over the past few years we’ve had the privilege to work closely with thousands of schools that are seeking to improve and innovate with the help of technology,” said Liz Anderson, who is Google’s Global Lead for their Education Adoption Programs. “Every school is different, but we’ve heard a lot of common themes from educators: that change is hard; that change is about a whole lot more than just technology; and that obstacles are often similar across districts.”

    She added, “School leaders face many of the same challenges and opportunities, but often have limited ways to share with — and learn from — each other. That’s why we’ve created a new hub for school leaders to share ideas, resources, and stories: The Google for Education Transformation Center.”

    The 7 Critical Areas for School Change

    Google brought educational leaders together from all over the US to create a “transformation framework” guiding schools to improving education through innovation and technology:

    • Vision – School change only happens when there is a strong vision at the start. When a school has a clear vision, it means the leader has ensured that the school and wider community are working together toward shared goals for the future.
    • Learning – School leaders empower their teams to create a set of instructional practices, curricula, assessments, and learning experiences that put students at the center – that engage learners deeply and meet their individual and collective needs.
    • Culture – Successful school leaders create structures, rituals, stories, and symbols that foster a culture of innovation and encourage people to learn from failure and success.
    • Technology – Technology is only one enabler of school change, but it’s a critical part. School leaders find, test, and gain their team’s support for the right technology (tools and processes) to meet the school’s vision.
    • Professional Development – Teachers have a lot on their plates. School leaders provide educators with effective professional development and ongoing coaching focused on applying tools and practices to meet student needs.
    • Funding & Sustainability – School leaders create a sustainable budget, identify a range of funding sources, and seek savings and reallocation opportunities that align directly to student goals.
    • Community – Schools serve diverse communities made up of parents, families, businesses, government, nonprofits, and residents. Throughout all stages of the transformation process, leaders ensure these partners support the school and the vision.

    Rich Ord is Co-founder of StudentGrowthWorks.org, a technology platform for monitoring student growth and making IEPs meaningful.

  • One of the Largest DDoS Attacks Ever Seen Kills Krebs Security Site

    One of the Largest DDoS Attacks Ever Seen Kills Krebs Security Site

    One of the largest distributed denial-of-service (DDoS) attacks ever seen on the internet has caused Akamai to dump a site it protected, KrebsOnSecurity.com. The DDoS attack was apparently in retaliation for journalist Brian Krebs’ recent article about vDOS, an alleged cyberattack-for-hire service. According to BI, two Israeli men were arrested following Krebs’ reporting, and the vDOS site was taken down.

    One Twitter post noted the irony of a security expert having his own site taken down by a DDoS attack. “Brian Krebs, the man who gives cybercriminals nightmares, has been hit with a Godzilla-sized DDoS attack,” noted cybercrime researcher, blogger and speaker Graham Cluley. “Sad news, hope he’s back soon.”

    The Attack Was Huge

    Before his site was taken down, Krebs posted about the attack on his website, saying that KrebsOnSecurity.com was the target of an extremely large and unusual distributed denial-of-service (DDoS) attack designed to knock the site offline. “The attack did not succeed thanks to the hard work of the engineers at Akamai, the company that protects my site from such digital sieges. But according to Akamai, it was nearly double the size of the largest attack they’d seen previously, and was among the biggest assaults the Internet has ever witnessed.”

    Later Akamai did take down the site and Krebs was understanding:

    “The attack began around 8 p.m. ET on Sept. 20, and initial reports put it at approximately 665 Gigabits of traffic per second,” writes Krebs. “Additional analysis on the attack traffic suggests the assault was closer to 620 Gbps in size, but in any case this is many orders of magnitude more traffic than is typically needed to knock most sites offline.”

    Krebs said that Martin McKeay, Akamai’s senior security advocate, told him that this was the largest attack they had seen. Earlier this year they clocked an attack at 363 Gbps, but there was a major difference: this attack was launched by a “very large” botnet of hacked devices, whereas typical DDoS attacks use a common amplifying technique that bulks up a small attack into a large one.
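    The reported numbers make clear why a botnet-driven attack is so hard to stop. Back-of-the-envelope arithmetic shows roughly how many hacked devices it would take to generate 620 Gbps, assuming a modest, purely hypothetical 1 Mbps of upload per device:

```python
attack_gbps = 620    # reported size of the attack on KrebsOnSecurity.com
device_mbps = 1      # assumed average upload per hacked device (hypothetical)

# Convert Gbps to Mbps, then divide by per-device bandwidth.
devices_needed = attack_gbps * 1000 / device_mbps
print(f"{devices_needed:,.0f} devices at {device_mbps} Mbps each")  # 620,000 devices
```

    Hundreds of thousands of compromised cameras and routers each sending a trickle of real traffic is far harder to filter than one amplified flood from a handful of sources, which is why this attack overwhelmed even Akamai’s defenses.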

    Krebs’ last tweets about the attack: