WebProNews

Tag: Amazon Web Services

  • We Are Iterating At a Faster Clip, Says Amazon Web Services CEO

    “We have a pretty significant market segment leadership position in this infrastructure cloud computing space,” says AWS CEO Andy Jassy. “There are a few reasons for it. The first is we just have much more functionality by a large amount than anybody else. We are also iterating at a faster clip. When you actually look at the details, that gap in functionality is widening.”

    Andy Jassy, CEO of Amazon Web Services, discusses how AWS began as the leader in the infrastructure cloud computing space and how that gap in functionality is widening in an interview with Kara Swisher at the 2019 Code Conference:

    In My Wildest Dreams, I Never Imagined a 6-Year Head Start

    We were trying just to get to launch (2006) without our friends across the lake (Microsoft) knowing about it. At that time Amazon was not known as a technology provider to companies. We felt it was really important for us to be first to market to have a chance to be successful. I was hoping we could just get to launch without anybody else knowing and beating us to the market. In my wildest dreams, of the many surprises we had, I never imagined we would have a six-year head start. I don’t know exactly why others didn’t follow. I think for some of the older guard technology companies our model was very disruptive to their existing businesses. It’s really hard to cannibalize yourself.

    I think they kind of wished it away (Oracle, IBM, etc.). When you have an existing business that’s working, it’s hard to cannibalize it with a product that has a much lower margin. I think that some of the other players probably were distracted by some of the other things they were working on. Then their initial attempt at the business turned out to be the wrong abstraction. It turned out to be a higher abstraction when builders really wanted the individual building blocks to construct and stitch together however they saw fit.

    We Are Iterating At a Faster Clip

    We have a pretty significant market segment leadership position in this infrastructure cloud computing space. There are a few reasons for it. The first is we just have much more functionality by a large amount than anybody else. We are also iterating at a faster clip. When you actually look at the details, that gap in functionality is widening. That turns out to really matter if you’re an enterprise or a government that is going to move all its applications to the cloud, or if you want to be able to unleash your builders to build anything they can imagine.

    The second thing is we just have a much larger ecosystem of partners around our platform. It’s not just the thousands of systems integrators who build practices on AWS. Most ISVs and SaaS providers will adapt their software to work on one technology infrastructure platform, few will do two, and hardly any will do three. They all start on AWS just because of our leadership position. You get to move to the cloud with a lot more of the software that you want to use.

    The third thing that is pretty different is that we’re just at a different operating maturity than these other providers, having been at it six years longer. It turns out that running large-scale infrastructure for yourself and your company, where you get to tell everybody the way it’s going to be, is really different from running it for millions of external customers with every imaginable use case all over the world, where they get to use you without any warning. It just forces a different kind of operating discipline and rigor. You can see that borne out in the operational performance.

    Microsoft Is the Clear Number Two Player Right Now

    We are pretty early in this space right now. In the infrastructure technology space I don’t think there are going to be 25 winners because scale really matters. But there’s not going to be just one. The market segments that we address in infrastructure software, hardware, and data center services globally are ultimately worth trillions of dollars. There are going to be several successful players. I do believe that Microsoft will have a business there. They are building the business and they are the clear number two player at this point. I think there will be other players who are successful as well. As for Google, I think they are working at it.

    In all of our businesses, there are startups that none of us know about today that have the ability to disrupt. If you think the technology changes in the last ten years have been disruptive, and I think they have been unbelievably dynamic, I think the next ten years are going to be faster than the last ten. There are all kinds of new technologies that will evolve that will give people a chance to build businesses and pursue various segments. I don’t know exactly who they will be in our space, but I’m confident there will be some.

  • AWS CEO: Cloud is Still Really Early Days

    “It’s still really early days,” says Amazon Web Services CEO Andy Jassy speaking about the cloud. “Sometimes we remind ourselves that even though it’s a $30 billion revenue run rate business growing 45 percent year-over-year, it’s the early stages of enterprise and public sector adoption in the US. Outside the US they’re 12 to 36 months behind depending on the country and industry.”

    Jassy says that although price is the conversation starter, speed and agility are the primary reasons that enterprises are moving to the cloud. He says that most startups have built their businesses from scratch on top of AWS. Some of the big examples, he notes, are Lyft, Airbnb, Pinterest, Slack, Tomo, and Robinhood.

    Andy Jassy, CEO of Amazon Web Services (AWS), discusses how the cloud is still really in the early days in an interview with Jim Cramer on CNBC:

    Cloud is Still Really Early Days

    Sometimes we remind ourselves that even though it’s a $30 billion revenue run rate business growing 45 percent year-over-year, it’s the early stages of enterprise and public sector adoption in the US. Outside the US they’re 12 to 36 months behind depending on the country and industry. It’s still really early days. The conversation starter when people move to the cloud is always cost. Instead of laying out all that capital for data centers and servers, you only spend on what you consume, which is usually very advantageous.

    Capital expense turns to variable expense, and variable expense is much lower than what most companies can do on their own because we have such large scale that we pass on to customers in the form of lower prices. We’ve lowered our prices on 70 different occasions in the last ten years. You get real elasticity. You provision what you need and if it turns out you need more, you provision more. If you don’t need it anymore because you’re past the peak, you just give it back and stop paying for it.
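
    In API terms, that elasticity is a pair of calls. A minimal sketch with boto3 (assuming configured credentials; the AMI ID and instance type are placeholders, not recommendations):

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Provision what you need: ten instances with one API call.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="t2.micro",
        MinCount=10,
        MaxCount=10,
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]

    # ...serve the peak...

    # Past the peak, give the capacity back and stop paying for it.
    ec2.terminate_instances(InstanceIds=instance_ids)
    ```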

    Primary Reason Enterprises Move to Cloud is Speed and Agility

    Price always is the conversation starter, but the number one reason that enterprises are moving is speed and agility. Usually, if you want to try an experiment in your company it takes 10 to 12 weeks to get a server, and then you have to build all the infrastructure software around it. In the cloud, you can provision thousands of servers in minutes. Then, because we have 165 services that you can use in whatever combination you want, you can get from an idea to implementation several orders of magnitude faster. You can innovate much quicker.

    As an example, what Lyft is doing in the space is pretty amazing, and the pace at which they’re growing is really amazing. They have been able to scale, first as a start-up and then as a fast-growing business, and what they would tell you is that being able to invent and change the customer experience so quickly, several orders of magnitude faster than they could if they were doing it on premises, has really helped build their business.

    The Cloud Encourages Innovation

    The vast majority of applications in the next five to ten years will be infused with some sort of machine learning. We are in kind of a golden age of computing. Almost every company that we speak with is interested, most importantly, in being able to take advantage of their own data. Most companies have gobs of data. Even startups today have gobs of data. But it’s so hard to know what’s in there, it’s so hard to know what the gems are, and it’s so hard to know what’s going to be the predictive pieces that change the customer experience. Our machine learning capabilities are going to solve that for a lot of customers.

    If you are building technology applications and trying to build consumer experiences, you want to do as much as you can for as little money as possible. Then when you have ideas you want to be able to move fast. One of the things that happens at companies before they build on the cloud is that it used to be so hard to get anything done that none of your employees spent any time outside of work thinking about new ideas, because why bother? It was so demoralizing that you never got to try them.

    With the cloud, you can provision instances and servers in minutes so people spend their free time thinking about new customer experiences. They know that if they come up with something over the weekend they can come in Monday and try it for a dollar. It changes how many people in your company think about innovation and where you get new ideas from throughout the company.

    Most Startups Have Built Their Businesses on Top of AWS

    Most startups have built their businesses from scratch on top of AWS. Some of the big examples are companies like Lyft, Airbnb, Pinterest, Slack, Tomo, and Robinhood. There is a very large number of them. But there are a lot of businesses that either haven’t gotten big yet or are just trying to build a business. One of the interesting things I remember happened in 2007-2008 when we had the recession. There were all these very gloomy emails sent from a lot of venture capitalists saying don’t expect to get funded, but the number of startups kept growing.

    As opposed to having to go raise money to pay for data centers and servers, people can try several instantiations of their idea on top of AWS, and if it isn’t getting traction you paid something like 80 cents a month or $1.50 a month, whatever your usage is. We have loads of companies that are trying to build businesses on top of us that really only pay anything meaningful when they have traction.

    Amazon More Focused on Long Term Than Most Companies

    It’s always hard for me to measure the impact we have on the overall world. The way we think about it at Amazon is that in every single one of our businesses we have never met customers who don’t want prices to go down. If the center of your gravity is customers, which it is in every single one of Amazon’s businesses, you’re always working relentlessly to find ways to take cost out of your structure so you can give it back to customers in the form of lower prices. It’s actually really easy to lower prices. It’s much harder to be able to afford to lower prices.

    We’re much more focused on the long term than most companies. We are trying to build a business and a set of customer relationships that outlasts all of us. As such, we think if we help our customers get more done and are successful on their own, even if it means lower margin percentages, over time we’ll drive more absolute margin dollars. They’ll be more successful and we will ultimately be more relevant.

    It Takes Work to Actually Move Away From Oracle

    I think Larry (Ellison of Oracle) has a certain view of the world that isn’t always steeped in what the facts are. If you look at Amazon, we adopted Oracle at a very early stage of the company. It takes work to actually move away from Oracle. Lots of customers are learning this as so many people are trying to move away from commercial-grade legacy database providers like Oracle or SQL Server to newer engines like Aurora.

    We now are 88 percent of the way through moving all of our Oracle databases and will be at 100 percent by mid this year. We turned off our Oracle data warehouse in November of last year and moved it to Redshift. We learned some very interesting patterns that customers are very excited about copying. We don’t really meet a lot of customers who aren’t looking to move away from those databases to Aurora.

  • Cloud is Really the New Normal for Financial Services

    “Cloud is really the new normal,” says Scott Mullins, Head of Worldwide Financial Services Business Development at AWS. “If you look across enterprise companies and financial services today, the vast majority are considering cloud as a major part of their IT strategy going forward. It’s just picked up that much momentum. I think we’re just scratching the surface in cloud for the industry.”

    Scott Mullins, Head of Worldwide Financial Services Business Development at Amazon Web Services, recently discussed how cloud has become a major part of every financial organization’s IT strategy:

    Financial Organizations Are Moving to the Cloud

    I get to actually lead a team of financial services experts whose sole function is to help our customers both from the standpoint of FinTech startups, all the way up to the largest banks, broker-dealers, exchange companies, and insurers use our tools. That’s what we do on a daily basis and we’re having a lot of fun doing it. It’s really fun to watch.

    I think the big stories in 2019 are going to probably be a couple of things. The first thing is if we look historically back at the last several re:Invent conferences, we’ve seen more financial institutions coming forward and talking about what they’re doing in the cloud. I think the reason for that is because we’re getting more muscle memory from these organizations.

    2019 Will Bring an Accelerated Transformation

    They’ve had experimentation, they’ve had some foundations they’ve been laying over the course of the last couple of years, and now they have confidence. They have confidence to do two things: number one, to move much more quickly to embrace these tools, to move more workloads over, and to build net new things, but also to talk about it. Most financial institutions don’t want to talk about something until they know it well, know it works for them, and know that they’ve really de-risked it for themselves.

    We saw Goldman Sachs last year. This year we saw Guardian Life Insurance talking about how they’ve changed the 158-year-old company and how they made it nimble and agile. They’ve actually been able to close data centers. I think we are going to see more of that. What that means is we’re going to see a much more accelerated transformation of the industry itself. I think we’re going to see more and more of those organizations coming out and talking about how cloud is a major part of their IT strategy going forward.

    Going to See a Much Richer Ecosystem of ISVs

    The second thing I think we’re going to see is a much richer ecosystem of ISVs. Just look across what we have today and what’s been announced this week. Bloomberg came out talking about B-Pipe on AWS. Refinitiv a couple of weeks ago was talking about the fact that Elektron runs on AWS. We’re working very closely with Broadridge. We’re working closely with Finical and Temenos and a lot of different vendors in the industry and that’s going to continue to happen at a rapid pace.

    Financial Industry Undergoing Massive Transformation

    The reason for that is twofold. Number one, you’ve got a lot of those customers who are going through massive transformations and they’re saying to their ISVs, I love the relationship that we have, but I’m moving to the cloud. If we’re going to continue to have a relationship you’ve got to move to the cloud with me, and those vendors are responding very positively.

    Or you’ve got some vendors like IHS Markit who several years ago said, you know what, the future of financial services is in the cloud and I need to start moving before even my customers are telling me so that I can be ahead of the game. Those are two things you’re going to see be very key themes in 2019.

    Cloud is Really the New Normal

    Cloud is really the new normal. If you look across enterprise companies and financial services today, the vast majority are considering cloud as a major part of their IT strategy going forward. It’s just picked up that much momentum. I think we’re just scratching the surface in cloud for the industry. There’s going to be room for not just one cloud provider, but multiple cloud providers and opportunities for everyone.


  • AWS CEO Announces Textract to Extract Data Without Machine Learning Skills

    AWS CEO Andy Jassy announced Amazon Textract at the AWS re:Invent 2018 conference. Textract allows AWS customers to automatically extract formatted data from documents without losing the structure of the data. Best of all, there are no machine learning skills required to use Textract. It’s something that many data-intensive enterprises have been requesting for many years.

    Amazon Launches Textract to Easily Extract Usable Data

    Our customers are frustrated that they can’t get more of all the text and data that is in documents into the cloud, so they can actually do machine learning on top of it. So we worked with our customers, we thought about what might solve these problems, and I’m excited to announce the launch of Amazon Textract. This is an OCR++ service to easily extract text and data from virtually any document, and there is no machine learning experience required.

    This is important: you don’t need to have any machine learning experience to be able to use Textract. Here’s how it generally works. Consider a pretty typical document: it’s got a couple of columns, and it’s got a table in the middle of the left column.

    When you use plain OCR, it just captures all that information in a row, and what you end up with is gobbledygook that is completely useless. That’s typically what happens.

    Let’s go through what Textract does. Textract is intelligent. Textract is able to tell that there are two columns, so when you get the data back, the language reads the way it’s supposed to be read. Textract is able to identify that there’s a table there and is able to lay out what that table should look like, so you can actually read and use that data in whatever you’re trying to do on the analytics and machine learning side. That’s a very different equation.
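
    To make that concrete, here is a minimal sketch against the Textract API as boto3 exposes it (the bucket and file names are hypothetical). Asking for the TABLES feature returns CELL blocks that carry row and column positions rather than a flat stream of words:

    ```python
    import boto3

    textract = boto3.client("textract", region_name="us-east-1")

    # Analyze a document stored in S3 and ask for table structure.
    resp = textract.analyze_document(
        Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "report.png"}},
        FeatureTypes=["TABLES"],
    )

    # Each table cell comes back with its position, so the table can be
    # reassembled instead of being flattened into one long row of text.
    for block in resp["Blocks"]:
        if block["BlockType"] == "CELL":
            print(block["RowIndex"], block["ColumnIndex"], block.get("Confidence"))
    ```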

    Textract Works Great with Forms

    What happens with most of these forms is that OCR can’t really read them or make them coherent at all. Sometimes templates will effectively memorize that a certain box holds a certain piece of data. Textract is going to work across legal forms, financial forms, tax forms, and healthcare forms, and we will keep adding more and more of these.

    But these forms also change every few years, and when they do, the box that you thought held a Social Security number may now hold a date of birth. What we have built Textract to do is to recognize what certain data items or objects are, so it’s able to tell that this set of characters is a Social Security number, this set of characters is a date of birth, and this set of characters is an address.

    Not only can we apply it to many more forms, but if those forms change Textract doesn’t miss a beat. That is a pretty significant change in your ability to extract and digitally use the data that is in documents.
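
    The forms behavior can be sketched the same way: asking for the FORMS feature returns KEY_VALUE_SET blocks in which each key points at its value, which is why labels follow the data even when the layout changes. A minimal sketch (hypothetical bucket and file names):

    ```python
    import boto3

    textract = boto3.client("textract", region_name="us-east-1")
    resp = textract.analyze_document(
        Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "tax-form.png"}},
        FeatureTypes=["FORMS"],
    )
    blocks = {b["Id"]: b for b in resp["Blocks"]}

    def text_of(block):
        """Concatenate the WORD children of a block."""
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [blocks[i]["Text"] for i in rel["Ids"]
                          if blocks[i]["BlockType"] == "WORD"]
        return " ".join(words)

    # KEY blocks point at their VALUE blocks, so a label and its data are
    # recovered as a pair no matter where the form puts the box.
    for block in resp["Blocks"]:
        if block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
            for rel in block.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for vid in rel["Ids"]:
                        print(text_of(block), "->", text_of(blocks[vid]))
    ```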

  • AWS Plans to Reduce Cost of Data Retrieval With New Satellite Connection Service

    Amazon Web Services is set to launch a satellite connection service that will make it faster for satellite operators to retrieve and secure their data. This heralds the company’s initial foray into space-linked hardware and will continue Amazon’s endeavors to assist the expanding industry.

    Amazon revealed its plans to build twelve satellite transmission facilities in different parts of the world during its yearly re:Invent conference. These “ground stations” are basically outposts equipped with antennas and are essential to transmitting and receiving data from the thousands of satellites currently orbiting our planet.

    AWS will reportedly allow its customers to rent access to the ground stations the same way that access to its data centers is leased. This means companies that do not typically have the financial capabilities to develop and operate their own satellite transmission base can now have access to these services on-demand.

    AWS Senior Vice President Charlie Bell explained in a statement how satellite data is vital “for building a wide range of important applications, but it is super complex and expensive to build and operate the infrastructure needed to do so.” But with AWS’ new service, customers can scale their satellite use based on their need.

    Once clients have paid for access, they can download, process, and save the retrieved satellite data. They can also work on their data using AWS services for analytics, storage, processing, and streaming. This can simplify the workflow and get the information out to developers quickly.

    The new satellite service can open up more opportunities for companies to be more creative with how they leverage information or data. For instance, live concerts or satellite photos of various regions can now be beamed anywhere in the world. Satellite data can also be used to save lives, such as rescuing people lost at sea or fighting ground fires.

    The ground stations will certainly make a big difference. Right now, governments and businesses who want to utilize satellite data for communications, weather forecasting, or surface imaging purposes need to invest in expensive infrastructure to do so. But with the AWS Ground Station, the cost of data retrieval and processing will be greatly reduced since the company will handle everything.

    There’s no word yet on the exact prices AWS will demand for its services, and Amazon isn’t sharing how much it will cost them to build the satellite stations. But pricing for the use of the stations will reportedly be computed per minute of downlink time, with options to pay for blocks of time in advance set to be offered as well.

    [Featured image via Pixabay]

  • Amazon Web Services Now has a Tool for Managing ‘Secrets’

    Even companies have secrets that must never be revealed to outsiders. These include passwords, API keys, and other credentials that could spell trouble and even cost the company money if they fall into the wrong hands.

    In this age where data breaches are a fact of life, securing company data has become even more important since businesses are now moving their systems into the cloud. In response to this need, cloud computing giant Amazon Web Services (AWS) just launched a slew of services that provide businesses with easy-to-use tools to help them secure their cloud data.

    One of these new services is the appropriately named Secrets Manager, which can be used by companies to store very important information such as passwords. AWS’s new offering is timely considering the latest round of reports saying that improperly stored passwords on the platform had been compromised by cyber attacks.

    “You never, ever again have to put a secret in your code,” Amazon CTO Werner Vogels assured audiences during the AWS Summit. Vogels added that the service “allows us to build systems that are way more secure than we could ever do in the past.”

    The Secrets Manager tool is not AWS’s first tool geared toward enhancing cybersecurity for its clients. The company previously introduced a simpler security system which was capable of storing encryption keys and worked with dedicated hardware modules.

    This time, however, the brand new AWS Secrets Manager has a broader use. Aside from storing passwords, the tool can also be used for storing database login data as well as keys to application programming interfaces for other services.
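
    In practice, “never put a secret in your code” means the application asks Secrets Manager for the credential at runtime. A minimal boto3 sketch (the secret name and value are placeholders):

    ```python
    import boto3

    secrets = boto3.client("secretsmanager", region_name="us-east-1")

    # Store a database credential once, outside any codebase.
    secrets.create_secret(
        Name="prod/app/db-credentials",  # hypothetical name
        SecretString='{"username": "app", "password": "example-only"}',
    )

    # At runtime the application fetches the secret instead of
    # hard-coding it, so rotating it requires no code change.
    value = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    print(value["SecretString"])
    ```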

    Along with Secrets Manager, AWS also launched the Firewall Manager. It gives clients centralized control over security policies across their entire organization and can also be used for control over multiple accounts and applications. The tool makes it easier for clients’ security teams to spot non-compliant applications and resolve issues in minutes.

    The recent tools are well-timed to address the security concerns clients might have raised in light of the recent incidents of data breaches in the cloud service. In October 2017, Accenture’s data stored by AWS was leaked and over 40,000 passwords were compromised. The Australian Broadcasting Corporation also experienced a data leak which included login information in November of last year.

    Of course, the new AWS tool isn’t free. The company charges 40 cents per secret per month as well as 5 cents per 10,000 programmatic requests.

    [Feature image via AWS website]

  • Amazon Web Services Acquires Cybersecurity Startup Sqrrl

    Cybersecurity will always remain a big issue that computing companies such as Amazon Web Services will have to address every time they court potential clients. After all, these clients will want assurance that their sensitive data will remain secure when stored off premises.

    With the discovery of decades-old system flaws like Spectre and Meltdown, assuring clients of the safety of their data is even more challenging for players in the cloud computing business. However, it appears that AWS has this issue already covered. The tech giant recently acquired Sqrrl, a cybersecurity firm with ties to the master of cybersecurity itself: the NSA.

    Rumors started circulating a few months ago that Amazon was eyeing the startup, which specializes in advanced computer threat prevention and detection. However, the acquisition has now been confirmed by Sqrrl CEO Mark Terenzoni in a post made on the company’s website.

    “We’ve reached another milestone in our journey!” Terenzoni announced in the post. “We’re thrilled to share that Sqrrl has been acquired by Amazon. We will be joining the Amazon Web Services family, and we’re looking forward to working together on customer offerings for the future.”

    At the moment, details of the deal are not yet available to the public. However, previous reports place the deal’s price tag at around $40 million.

    Of course, such a figure is not that big of a deal to AWS, which is still the leader in cloud computing. In the third quarter of 2017 alone, AWS posted a staggering $1.17 billion in income on the $4.58 billion it generated in revenue.

    Interestingly, the Sqrrl deal comes shortly after AWS announced plans to pick up more business from the U.S. intelligence agencies. In fact, the company revealed that it will be forming a “secret” region of data centers specifically to handle the cloud computing needs of these agencies.

    Sqrrl already has ties with the NSA that date back to 2011. In 2012, it handled NSA’s open-source database software called Accumulo.

    [Featured image via Amazon Web Services]

  • Amazon Introduces Unified Auto Scaling for AWS

    One big reason why businesses find cloud computing attractive is its scalability. But Amazon Web Services (AWS) just brought scalability to a whole new level by launching a new feature called AWS Auto Scaling, which allows clients to adjust the scaling features of multiple AWS services via a single interface.

    With cloud computing’s scalability, companies no longer need to unnecessarily spend on computing hardware that they rarely use. Cloud computing providers offer clients very flexible options; they can scale up hardware capacity in times of heavy demand as well as scale down on their computing resources allocation for times when computing demand is low.

    Realizing this enormous advantage, companies usually use multiple AWS services to handle the different aspects of their operations and application needs. Until now, adjusting the scaling of these different AWS services was done independently. However, with the new AWS Auto Scaling feature, keeping track of and tweaking the scaling of a company’s different cloud computing services with AWS should now be a breeze, as explained in a post by AWS Chief Evangelist Jeff Barr.

    “You no longer need to set up alarms and scaling actions for each resource and each service. Instead, you simply point AWS Auto Scaling at your application and select the services and resources of interest. Then you select the desired scaling option for each one, and AWS Auto Scaling will do the rest, helping you to discover the scalable resources and then creating a scaling plan that addresses the resources of interest.”

    Of course, the end game for this auto-scaling feature is for companies to have greater control over their desired mix of availability versus cost, which ultimately determines the amount of computing resources they get from AWS.
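
    Under the hood, “pointing AWS Auto Scaling at your application” creates a scaling plan. A minimal sketch using the autoscaling-plans API in boto3 (the tag, Auto Scaling group name, and target value are hypothetical):

    ```python
    import boto3

    plans = boto3.client("autoscaling-plans", region_name="us-east-1")

    # Identify the application by a tag, then hand AWS Auto Scaling a
    # target-tracking rule for the EC2 Auto Scaling group behind it.
    plans.create_scaling_plan(
        ScalingPlanName="my-app-plan",
        ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["my-app"]}]},
        ScalingInstructions=[{
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/my-app-asg",
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 2,
            "MaxCapacity": 20,
            "TargetTrackingConfigurations": [{
                "PredefinedScalingMetricSpecification": {
                    "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                },
                "TargetValue": 50.0,  # keep average CPU near 50 percent
            }],
        }],
    )
    ```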

    The new feature offers four settings: Optimize for availability, Balance availability and cost, Optimize cost, and Custom.

    The new feature is now live in the Asia Pacific, US East (Ohio), US East (Northern Virginia), EU and US West (Oregon) regions.

    [Featured Image via Amazon Web Services]

  • Microsoft Azure Cuts into AWS’s Market Share in 4th Quarter

    Microsoft’s Azure is finally gaining a foothold in the cloud computing niche with its market share jumping from 16 to 20 percent in the 4th quarter of 2017. The jump amounts to around $3.7 billion of its total revenue for the year. In contrast, Amazon Web Services (AWS) saw its market share drop from 68 to 62 percent in the same time frame.

    The latest industry figures were provided by analysts from KeyBanc, a Cleveland-based boutique investment bank specializing in mergers and acquisitions.

    With the cloud computing market still in its infancy, tech companies are busy positioning themselves to gain the upper hand in the lucrative niche. At the moment AWS is the dominant player, but this year will likely see many companies significantly expand their respective cloud computing divisions so as not to get left out of the emerging market.

    Microsoft, for one, has already made sizable investments in Azure, gearing up for the inevitable competition. The company recently added a number of data centers in the UK and other parts of the globe. Factoring in Microsoft’s efforts, KeyBanc expects the Azure platform to grow rapidly, projecting a massive 88 percent increase by the end of 2018.

    It’s fair to say Microsoft is on a buying spree in an effort to boost its market presence and clout. Recently, it acquired Avere Systems, a startup specializing in data storage solutions.

    But as expected, AWS will not take the challenge sitting down. Last November, it announced a new partnership with Cerner, a firm specializing in offering technology solutions for the healthcare industry.

    [Image by Azure/Facebook]

  • Amazon Web Services Improves AI for New Consultancy Program

    Amazon has rolled out a consultancy program with the goal of assisting consumers with cloud machine learning. The company plans to do this by connecting clients with their own experts.

    Dubbed the Amazon ML Solutions Lab, it helps clients unfamiliar with machine learning find beneficial and efficient uses of it for their companies. Amazon plans to do this by combining brainstorming sessions with workshops to help clients better understand machine learning in the cloud. The company will also have its experts act as advisors to clients. Together they will work through the problems a company faces and then come up with machine learning-based resolutions. Amazon’s cloud experts will also check in with the company weekly to see how the project progresses.

    No two solutions will be alike though, as the ML Solutions Lab will work according to the needs of the business. For instance, Amazon could send their developers on site if the client wants a more hands-on approach or clients could go to AWS’ Seattle headquarters for training.

    How long the ML Solutions Lab will work with the company will also depend on the client. But it’s expected to last anywhere from 3 to 6 months.

    Companies that have more experience with machine learning can opt for the ML Solutions Lab Express. It’s an expedited program that runs for a month and begins with a 7-day intensive bootcamp at Amazon headquarters. However, this program is only offered to companies with machine-learning-quality data, since it’s geared toward feature engineering and building models swiftly.

    Amazon has not shared any details yet on how much the program will cost companies. No information has been posted on its website yet and company representatives are reportedly not responding to any requests at the moment.

    Vinayak Agarwal, Amazon’s senior product manager for AWS Deep Learning, pointed out in a blog post that the company has been immersed in machine learning for the past two decades. He also added that Amazon has pioneered innovations in areas like forecasting, logistics, supply chain optimization, personalization and fraud prevention.

    Agarwal further encouraged clients to take a closer look at the ML Solutions Lab, saying that they will have access to the experts who developed most of the company’s machine learning-based products and services, such as recommendations and fraud prevention.

    The Amazon ML Solutions Lab is being offered to customers worldwide. However, the ML Solutions Lab Express is currently exclusive to US clients.

    To get started with the Amazon ML Solutions Lab, visit https://aws.amazon.com/ml-solutions-lab.

    [Featured image via Amazon Web Services]

  • Amazon Web Services Introduces Per-Second Billing to Keep Rivals at Bay

    Amazon is determined to maintain its lead over rivals in the cloud computing arena. Recently, the company’s cloud computing division Amazon Web Services (AWS) announced that it will be introducing a new pricing scheme by October and plans to charge clients by the second, a move seen to be financially favorable for its cloud customers.

    In a recent blog post, AWS chief evangelist Jeff Barr announced that the company would be changing its billing scheme to be more reflective of clients’ actual usage. Starting October 2, 2017, AWS will implement per-second billing for its EC2 and EBS services.

    The billing change will be applicable to all AWS regions running Linux instances. However, instances running on Microsoft Windows as well as Linux distributions with a separate hourly charge will not be affected by the new scheme.

    AWS expects that the move will be beneficial to many of its EC2 clients. However, Barr challenged companies to be more creative to take full advantage of the savings opportunities presented by the billing change.

    “While this will result in a price reduction for many workloads (and you know we love price reductions), I don’t think that’s the most important aspect of this change,” Barr explained. “I believe that this change will inspire you to innovate and to think about your compute-bound problems in new ways. How can you use it to improve your support for continuous integration?”
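
    The arithmetic behind those savings is simple. A short illustration, assuming a hypothetical $0.10-per-hour Linux instance and the one-minute minimum that accompanies per-second billing:

    ```python
    import math

    HOURLY_RATE = 0.10  # hypothetical on-demand rate, USD per hour

    def cost_per_second(runtime_seconds: float) -> float:
        """Per-second billing: pay for actual runtime, one-minute minimum."""
        return max(runtime_seconds, 60) * HOURLY_RATE / 3600

    def cost_per_hour(runtime_seconds: float) -> float:
        """Old per-hour billing: every started hour is charged in full."""
        return math.ceil(runtime_seconds / 3600) * HOURLY_RATE

    # A 10-minute batch job:
    print(round(cost_per_second(600), 4))  # 0.0167
    print(round(cost_per_hour(600), 4))    # 0.1
    ```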

    Analysts are divided on what AWS’ decision to introduce per-second billing could mean to the cloud computing industry as a whole. For instance, there are speculations that it could become an industry-wide trend as it could trigger similar offerings by other players.

    Cloudreach Europe head Chris Bunch expects it to become the industry norm in the future. “Longer term the world will get used to per-millisecond billing anyway with serverless architectures, so it’s good to see this happening now,” said Bunch. “I would expect other cloud companies to follow this trend.”

    However, not everyone believes the hype, as some analysts suggested that it could just be a PR stunt. “It’s a PR stunt isn’t it?” quipped UKFast CEO Lawrence Jones. “It’s trying to make something that’s very expensive sound very, very cheap.”

    Amazon first introduced the pay-as-you-go model in cloud computing usage when it launched EC2 in 2006. Back then, AWS charged clients on a per hour basis, a pricing scheme that was deemed revolutionary at that time.

    However, its rivals challenged AWS’ dominance by offering more competitive pricing structures. Google, for one, introduced a more accurate per-minute billing scheme deemed more reflective of actual usage.

    [Featured Image via TechRepublic]

  • AWS Outpaces Rival Cloud Platforms, Props Up Amazon’s Q2 Earnings Report

    Amazon Web Services’ (AWS) performance was highlighted in the recent second quarter earnings report of parent firm Amazon. In fact, it seems that the retailer’s cloud computing unit held the fort for the entire group, becoming the leading contributor to the company’s profits.

    While Amazon.com, Inc. missed its earnings estimates, AWS continued to dominate its niche, beating rivals Microsoft’s Azure and Google Cloud Platform, The Street reported. Its second quarter revenues rose by 42 percent from year-ago levels to an astounding $4.1 billion after introducing 400 new features and services and becoming the largest publicly held cloud computing provider in the process.

    AWS managed to woo a number of big corporate clients in the last 12 months, which contributed to its massive revenue increase. These include BP PLC, Ancestry.com, and the California Polytechnic State University. In addition, AWS has already entered into an agreement to provide artificial intelligence and machine learning services to Capital One Financial Corp., the American Heart Association, and U.S. space agency NASA.

    Meanwhile, parent firm Amazon’s second quarter earnings fell short of Wall Street estimates despite AWS’ massive contribution. The online retailer also warned of possible negative earnings next quarter as the company continues to allocate massive investments to ensure its future growth.

    “AWS continues to move forward on new products and win more significant enterprise business. That said, it – and public cloud more generally – is not the right answer for every organization at the moment,” was how Kate Hanaghan of IT analyst company TechMarketView explained the anticipated growth slowdown.

    Despite that, Wall Street appears comfortable with Amazon’s strategy, as the company has always shown continued profitability. Amazon share prices have climbed 40 percent since the start of 2017.

    [Featured Image by Robert Scoble/Flickr]

  • Amazon Puts the Export in AWS Import/Export Snowball

    Last year, Amazon announced AWS Import/Export Snowball, a petabyte-scale data transport solution enabling the transfer of large amounts of data into and out of the AWS cloud.

    At the time, the service could only be used to move data into AWS. Now, it can be used for data export operations as well.

    “If you have collected, generated, or stored terabytes or petabytes of data in AWS and need to get to it more quickly than you could via a network connection, you can now use AWS Import/Export Snowball instead,” says Amazon’s Jeff Barr.

    “You simply log in to the AWS Management Console, create an export request, and specify the data to be exported,” he adds. “A single request can span one or more Amazon Simple Storage Service (S3) buckets. The service will determine how many appliances are needed (each one can hold up to 50 terabytes) and create the export jobs accordingly. The appliances will be prepared, data will be copied to them, and they will be shipped to the address specified in the request. You can track each of these steps using the Console.”

    Barr notes that if your data is stored in Amazon Glacier, you’ll need to restore it to S3 with the Lifecycle Restore feature first.
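
    Barr describes the console workflow; the same export request can also be sketched programmatically against the Snowball API that boto3 exposes today (the bucket ARN, role ARN, and address details are placeholders):

    ```python
    import boto3

    snowball = boto3.client("snowball", region_name="us-east-1")

    # Register a shipping address for the appliance (fields abbreviated).
    addr = snowball.create_address(Address={
        "Name": "Jane Doe", "Company": "Example Corp",
        "Street1": "123 Main St", "City": "Seattle",
        "StateOrProvince": "WA", "PostalCode": "98101",
        "Country": "US", "PhoneNumber": "555-0100",
    })

    # Create an export job spanning one or more S3 buckets; the service
    # works out how many appliances the data requires.
    snowball.create_job(
        JobType="EXPORT",
        Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-archive-bucket"}]},
        AddressId=addr["AddressId"],
        RoleARN="arn:aws:iam::123456789012:role/snowball-access",
        ShippingOption="SECOND_DAY",
    )
    ```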

    More information about AWS Import/Export Snowball is available here.

    Images via iStock, Amazon

  • AWS Marketplace Gets SharePoint Enterprise

    Amazon announced that SharePoint Enterprise is now available to purchase and deploy in AWS Marketplace.

    Amazon’s Jeff Barr says, “Designed to suit the needs of many different types and sizes of organizations up to and including multinational enterprises, SharePoint Enterprise supports custom applications, mobile applications, social media integration, business intelligence, and advanced content management (none of which is available in the entry-level SharePoint Foundation product).”

    “This new offering helps customers to migrate their SharePoint and Windows Server workload and is made possible by the folks at Data Resolution,” he adds. “As a long-time SharePoint provider and Microsoft Hosting Provider, they have the background and experience needed to design and support a scalable implementation of SharePoint Enterprise in the AWS Cloud. They are already migrating one of their on-premises SharePoint customers to AWS, and are planning to work with us to produce a customer success story later this year. Data Resolution will also be offering consulting and migration services in order to make your transition to the cloud as smooth as possible.”

    The implementation utilizes Amazon’s AWS Marketplace Support for Clusters and AWS Resources.

    You can choose from SharePoint Enterprise 2013 Basic, SharePoint Enterprise 2013 Business, or SharePoint Enterprise 2013 Advanced. You can start with a free trial, and pay by the hour, by the month, or annually. You can also choose between 10, 25, and 100 user options.

    SharePoint Enterprise can be run in the following AWS regions: US East (Northern Virginia), US West (Oregon), or US West (Northern California).

    More info here.

    Image via Amazon

  • Amazon Is Acquiring Software Company NICE

    Amazon announced that Amazon Web Services is acquiring NICE, a software and services provider for high performance and technical computing.

    NICE will operate under its existing brand and continue to work with its customers and partners as well as develop and support its EnginFrame and Desktop Cloud Visualization products.

    “Like AWS, we are a customer-obsessed company and we are globally appreciated for the excellence of our support and professional services,” said Bruno Franzini, Support and Professional Services Manager at NICE. “With the backing of AWS, we will pamper our customers even more!”

    CEO Beppe Ugolotti added, “The entire team will be with us to open this new chapter of our history. Everybody is already dreaming about all the new technologies we will be able to develop working together with our future colleagues at AWS.”

    The agreement is signed and the deal is expected to close within the quarter. Terms were not disclosed.

    Image via NICE

  • AWS IoT Is Now Generally Available

    Amazon announced that AWS IoT, which launched in beta a few months ago, is now generally available. This is the company’s managed cloud platform that lets connected devices interact with cloud applications and other devices.

    In a post on the AWS blog, Amazon’s Jeff Barr writes:

    We built AWS IoT because connected devices are proliferating. They are in your house, your car, your office, your school, and perhaps even in your body! Like some of our more advanced customers, we have been building systems around connected devices for quite some time. Our experience with Amazon Robotics, drones (Amazon Prime Air), the Amazon Echo, the Dash Button, and multiple generations of Kindles has given us a well-informed perspective on how to serve this really important emerging market. Behind the scenes, AWS services such as AWS Lambda, Amazon API Gateway, Amazon DynamoDB, Amazon Kinesis, Amazon Simple Storage Service (S3), and Amazon Redshift provide the responsive, highly scalable infrastructure needed to build a robust IoT application.

    When we talked to our customers and to our own engineers, we learned quite a bit about the pain points that add complexity and development time to IoT applications. They told us that connecting devices to the cloud is overly complex due to the variety of SDKs and protocols that they need to support in a secure and scalable fashion. Making this even more difficult is the fact that many devices “feature” intermittent connectivity to the Internet, even as application logic shifts from the device to the cloud. Finally, the sheer volume of data generated by the sensors attached to the devices mandates a Big Data approach to storage, analytics, and visualization.

    In the post, Barr talks about how the Philips HealthSuite platform and Scout Alarm are using AWS IoT. Use cases Amazon highlights include: agriculture, cars/trucks, consumer devices, gaming, home automation, logistics, medical, municipal infrastructure, oil/gas, and robotics.
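
    For a taste of the data plane, here is a minimal boto3 sketch of publishing a sensor reading to the AWS IoT message broker (the endpoint and topic are hypothetical; real devices typically speak MQTT with per-device certificates):

    ```python
    import json

    import boto3

    # Data-plane client for the AWS IoT message broker; the endpoint is
    # account-specific and this one is a placeholder.
    iot_data = boto3.client(
        "iot-data",
        endpoint_url="https://example-ats.iot.us-east-1.amazonaws.com",
    )

    # Publish a reading; IoT rules can then route it to Lambda,
    # DynamoDB, Kinesis, S3, and so on.
    iot_data.publish(
        topic="home/livingroom/temperature",
        qos=1,
        payload=json.dumps({"celsius": 21.5}),
    )
    ```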

    You can find the documentation here.

    Image via Amazon

  • Amazon Web Services Adds New APIs For Testing Access Control Policies

    Amazon Web Services announced two new APIs for AWS Identity and Access Management (IAM) that let you automate validation and auditing of permissions for IAM users, groups, and roles. They let you call the IAM policy simulator with the AWS CLI or any AWS SDK.

    The new iam:SimulatePrincipalPolicy API lets you programmatically test existing IAM policies to verify that policies work properly and identify specific statements in a policy that grant or deny access to specific resources or actions.

    Amazon explains:

    Simulate the set of IAM policies attached to an IAM entity against a list of API actions and AWS resources to determine the policies’ effective permissions. The entity can be an IAM user, group, or role. If you specify a user, then the simulation also includes all of the policies attached to groups that the user is a member of.

    You can optionally include a list of one or more additional policies specified as strings to include in the simulation. If you want to simulate only policies specified as strings, use SimulateCustomPolicy instead.

    The simulation does not perform the API actions; it only checks the authorization to determine if the simulated policies allow or deny the actions.

    The iam:SimulateCustomPolicy API will let you test the effects of new and/or updated policies that aren’t attached to users, groups, or roles.
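
    A minimal sketch of both calls with boto3 (the user ARN, actions, resources, and draft policy are hypothetical):

    ```python
    import boto3

    iam = boto3.client("iam")

    # Test the policies already attached to a user (group policies are
    # included automatically).
    resp = iam.simulate_principal_policy(
        PolicySourceArn="arn:aws:iam::123456789012:user/alice",
        ActionNames=["s3:GetObject", "s3:PutObject"],
        ResourceArns=["arn:aws:s3:::my-bucket/*"],
    )
    for result in resp["EvaluationResults"]:
        print(result["EvalActionName"], "->", result["EvalDecision"])

    # Test a draft policy that isn't attached to anyone yet.
    draft = """{
      "Version": "2012-10-17",
      "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                     "Resource": "arn:aws:s3:::my-bucket/*"}]
    }"""
    resp = iam.simulate_custom_policy(
        PolicyInputList=[draft],
        ActionNames=["s3:GetObject", "s3:PutObject"],
    )
    for result in resp["EvaluationResults"]:
        print(result["EvalActionName"], "->", result["EvalDecision"])
    ```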

    Brigid Johnson from Amazon Web Services walks you through utilizing the APIs in a blog post here. You can find documentation here and further discussion in a forum here.

    Image via Amazon

  • AWS Device Farm Is About To Get iOS App Support

    Earlier this month, Amazon Web Services announced the launch of AWS Device Farm, a way to automate and scale app testing on actual mobile devices. It was only announced for Android and the Android-based Fire OS, however.

    On Wednesday, Amazon announced the expansion of the offering to iOS apps. While it’s not quite here yet, it will be very soon. The company plans to launch it on August 4 with support for Appium (Java JUnit and Java TestNG), Calabash, UI Automation, and XCTest.

    “You can also use the fuzz test that is built in to Device Farm. This test randomly sends user interface events to devices and reports on the results,” says Amazon’s Jeff Barr in a blog post announcing the news.

    “After you upload your binary to Device Farm, you will have the opportunity to select the app to test,” he explains. “After you start the test, the test results and the associated screen shots will be displayed as they arrive.”

    Developers will be able to test cross-platform titles and get various reports on problem patterns, logs, screenshots, performance data, etc. They should be consistent regardless of platform and test framework.

    While the offering won’t be available until August 4, Amazon is suggesting you get started reading the documentation and creating test suites/scripts.
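
    For orientation, a fuzz run against the Device Farm API in boto3 looks roughly like this (the project and device pool ARNs are placeholders; Device Farm lives in the us-west-2 region):

    ```python
    import boto3

    df = boto3.client("devicefarm", region_name="us-west-2")
    project_arn = "arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE"

    # Register an upload slot for the iOS binary; the .ipa file is then
    # PUT to the pre-signed URL the call returns (omitted here).
    upload = df.create_upload(
        projectArn=project_arn,
        name="MyApp.ipa",
        type="IOS_APP",
    )

    # Schedule the built-in fuzz test, which randomly sends UI events
    # to the devices in the pool and reports the results.
    df.schedule_run(
        projectArn=project_arn,
        appArn=upload["upload"]["arn"],
        devicePoolArn="arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE",
        test={"type": "BUILTIN_FUZZ"},
    )
    ```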

    image via Amazon

  • Amazon Web Services Launches API Gateway, Device Farm

    Amazon just announced Amazon API Gateway, which is aimed at making it easier to build and run “reliable, secure” APIs at any scale. It’s a fully managed service that lets AWS customers create, publish, maintain, monitor, and secure APIs.

    “With a few clicks in the AWS Management Console, customers can create an API that acts as a ‘front door’ for applications to access data, business logic, or functionality from their ‘back-end’ services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), or code running on AWS Lambda. Amazon API Gateway handles all of the tasks associated with accepting and processing billions of daily API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs, and developers pay only for the API calls they receive and the amount of data transferred out,” Amazon explains.

    The product lets customers use AWS security tools they’re already familiar with, like AWS Identity and Access Management (IAM), to verify and authenticate API requests. It also lets them run multiple versions of an API at the same time in order to develop and test other versions of APIs without affecting existing apps.
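
    A minimal sketch of that workflow with boto3 (the API name is made up, and a mock integration stands in for a real Lambda or EC2 back end):

    ```python
    import boto3

    apigw = boto3.client("apigateway", region_name="us-east-1")

    # Create the API; a fresh API has only its root ("/") resource.
    api_id = apigw.create_rest_api(name="my-front-door")["id"]
    root_id = apigw.get_resources(restApiId=api_id)["items"][0]["id"]

    # Expose GET / backed by a mock integration.
    apigw.put_method(restApiId=api_id, resourceId=root_id,
                     httpMethod="GET", authorizationType="NONE")
    apigw.put_integration(restApiId=api_id, resourceId=root_id,
                          httpMethod="GET", type="MOCK",
                          requestTemplates={"application/json": '{"statusCode": 200}'})

    # Publish to a stage; separate stages let several versions of the
    # API run side by side.
    apigw.create_deployment(restApiId=api_id, stageName="prod")
    ```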

    You can get a closer look here and check out this FAQ page for additional info.

    The company also announced AWS Device Farm, which lets developers automate and scale Android and Fire OS app testing on actual mobile devices.

    “Today, to test mobile apps, developers most often rely on manual testing of their apps,” Amazon says in an announcement. “They use emulators that try to simulate the behavior of real devices, or they rely on their own collection of local devices that only cover a small set of the overall device market. Developers also have to address variations in firmware and operating systems, maintain operation with intermittent network connectivity, integrate reliably with back-end services, and ensure compatibility with other apps running on the device. Now, AWS Device Farm gives developers access to a fleet of devices that includes all the latest hardware, operating systems, and platforms so they can instantly test their apps across a large selection of Android and Fire devices, and integrate these tests into their continuous deployment cycle. AWS Device Farm removes the complexity and expense of designing, deploying, and operating device farms and automation infrastructure so that developers can focus on delivering the best app experience to their customers. Developers simply upload their Android or Fire OS application and select from a catalog of devices. Then, developers can configure AWS Device Farm’s built-in test suite to verify functionality with no scripting required, or they can choose from a range of popular, open-source test frameworks like Appium, Calabash, and Espresso.”

    AWS Device Farm will be available on July 13. More here.

    Images via Amazon

  • Amazon Web Services Adds Support For Budgets And Forecasts

    Amazon announced the launch of support for budgets and forecasts in Amazon Web Services a year after launching the Cost Explorer, which is integrated with the AWS Billing Console.

    The Cost Explorer gives users reporting, analytics, and visualization tools. The new budgets and forecasts support lets users define and track budgets for AWS costs and forecast AWS costs for up to three months out.

    It also now provides the ability to get email notifications when actual costs exceed or are forecast to exceed budget costs.

    “Budgeting and forecasting takes place on a fine-grained basis, with filtering or customization based on Availability Zone, Linked Account, API operation, Purchase Option (e.g. Reserved), Service, and Tag,” says Amazon’s Jeff Barr in a blog post. “The operations provided by these new tools replace the tedious and time-consuming manual calculations that many of our customers (both large and small) have been performing as part of their cost management and budgeting process.”

    Amazon ran a private beta of the new features with over a dozen customers, and says that after doing so, it believes the tools will help customers do a better job of managing costs.

    With the budgets support, users can set monthly budgets around AWS costs and customize them by multiple dimensions (including tags). The AWS Management Console will list each budget, and you can filter them by name.

    “You can set alarms that will trigger based on actual or forecast costs, with email notification to a designated individual or group,” explains Barr. “These alarms make use of Amazon CloudWatch but are somewhat more abstract in order to better meet the needs of your business and accounting folks. You can create multiple alarms for each budget. Perhaps you want one alarm to trigger when actual costs exceed 80% of budget costs and another when forecast costs exceed budgeted costs. You can also view variances (budgeted vs. actual) in the console.”

    For forecasts, AWS will use the same algorithm that many of its teams use to predict demand for their own offerings. It will give you cost estimates that include 80% and 95% confidence interval ranges. You can filter forecasts by various dimensions.
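
    The budget setup described here lives in the console; today the same configuration can be sketched through the AWS Budgets API that boto3 exposes (the account ID and email address are placeholders):

    ```python
    import boto3

    budgets = boto3.client("budgets")

    # A monthly cost budget that emails a group when *forecast* spend
    # crosses 80 percent of the limit.
    budgets.create_budget(
        AccountId="123456789012",
        Budget={
            "BudgetName": "monthly-aws-spend",
            "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "FORECASTED",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL",
                             "Address": "ops@example.com"}],
        }],
    )
    ```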

    Images via Amazon

  • Amazon Web Services Launches New EC2 Instances

    Amazon announced the launch of new M4 instances for Amazon Elastic Compute Cloud (EC2). These are the next generation of general purpose instances; they make use of custom 2.4 GHz Intel Xeon E5-2676 v3 Haswell processors and offer dedicated bandwidth to Amazon Elastic Block Store (Amazon EBS).

    The instances provide Enhanced Networking for higher packet per second performance, lower network jitter, and lower network latencies, according to the company. They also deliver up to four times the packet rate of instances without Enhanced Networking.

    “Within Placement Groups, Enhanced Networking reduces average latencies between instances by 50 percent or more,” Amazon says. “M4 instances are well-suited for a wide variety of applications including relational and in-memory databases, gaming servers, caching fleets, batch processing, and business applications like SAP and Microsoft SharePoint.”

    “Amazon EC2 provides a comprehensive selection of instances to support virtually any workload, and we continue to deliver new technologies and high performance in our current generation instances,” says Matt Garman, Vice President, Amazon EC2, AWS. “M4 instances bring new capabilities to the General Purpose family through the use of a custom Intel Haswell processor and larger instance sizes. We are also pleased to deliver even better network performance with dedicated bandwidth to Amazon EBS and Enhanced Networking, an Amazon EC2 feature that we are providing, for the first time, to General Purpose Instances. With these capabilities, M4 is one of our most powerful instance types and a terrific choice for workloads requiring a balance of compute, memory, and network resources.”

    You can launch M4 instances using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and third-party libraries. There are five different sizes: m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, and m4.10xlarge.
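
    Launching one from an SDK is a single call. A minimal boto3 sketch (the AMI ID is a placeholder):

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single general-purpose M4 instance; the dedicated EBS
    # bandwidth the article mentions comes with the instance type.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="m4.large",
        MinCount=1,
        MaxCount=1,
    )
    ```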

    More details on the AWS blog.

    In other AWS news, Redshift is getting more cost-effective.

    Image via Amazon