WebProNews

Tag: database

  • Sophos Suffers Data Exposure Incident

    Security firm Sophos has informed customers it suffered a data breach as a result of a misconfigured database.

    According to ZDNet, customers’ personal information was exposed, including names, emails and phone numbers. The company informed impacted customers via email, a copy of which ZDNet obtained.

    “On November 24, 2020, Sophos was advised of an access permission issue in a tool used to store information on customers who have contacted Sophos Support,” the email reads.

    The company confirmed the breach to ZDNet, saying that only a “small subset” of its customers were impacted. Nonetheless, this is the second major security issue this year for Sophos, a major source of embarrassment for a company in the business of providing computer security to its customers.

    The company tried to assure customers it was doing everything it could to address the issue.

    “At Sophos, customer privacy and security are always our top priority. We are contacting all affected customers,” the company said. “Additionally, we are implementing additional measures to ensure access permission settings are continuously secure.”

  • Financial Network, Inc. Leaves Oracle In Favor Of MariaDB SkySQL

    Financial services firm Financial Network, Inc. (FNI) is leaving Oracle’s platform in favor of MariaDB SkySQL.

    MariaDB was forked from MySQL in 2009, when Oracle acquired Sun Microsystems and, with it, the MySQL database engine. Developers were concerned about the future of MySQL under Oracle and wanted a version of the database that would remain independent of Oracle while maintaining full compatibility.
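
    That compatibility is concrete: the same client code can talk to either engine. A minimal sketch, assuming the widely used mysql-connector-python driver; the host and credentials are hypothetical placeholders:

        import mysql.connector  # pip install mysql-connector-python

        # Point this at either a MySQL or a MariaDB server; no code changes needed.
        conn = mysql.connector.connect(
            host="db.example.com",
            user="app",
            password="secret",
            database="orders",
        )
        cur = conn.cursor()
        cur.execute("SELECT VERSION()")  # e.g. '10.5.8-MariaDB' or '8.0.22' (MySQL)
        print(cur.fetchone()[0])
        conn.close()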

    MariaDB Corporation pairs the database with SkySQL for “the first and only database-as-a-service (DBaaS) to bring the full power of MariaDB Platform to the cloud, combining powerful enterprise features and world-class support with unrivaled ease of use and groundbreaking innovation.”

    SkySQL is offered as a DBaaS on Google Cloud Platform, and MariaDB is used by Google, Mozilla, Deutsche Bank, DBS Bank, Nasdaq, Red Hat, ServiceNow, Verizon and Walgreens. Now, FNI is leaving Oracle in favor of MariaDB and SkySQL.

    “MariaDB has been a true collaborative partner for us in our journey to the cloud,” said Bryan Bancroft, lead database administrator at FNI. “With SkySQL, we don’t have to bother with containers or managing the database, that’s left to the database professionals at MariaDB. We also have the option of easily expanding our applications to leverage blended transactions and analytics when the time is right. Moving to MariaDB from Oracle was a key strategic business decision for us and has ultimately saved us up to 80% in database costs – allowing us to reinvest the savings into delivering new, critical solutions for our customers.”

    The announcement is a big win for MariaDB and a loss for Oracle, just as Oracle is doubling down in an effort to take on its bigger cloud rivals.

  • Microsoft Error Exposes Database Containing 250 Million Service Records

    Microsoft has announced in a blog post that a database containing 250 million service records was left exposed due to a configuration error.

    According to security firm Comparitech, a “security research team led by Bob Diachenko uncovered five Elasticsearch servers, each of which contained an apparently identical set of the 250 million records. Diachenko immediately notified Microsoft upon discovering the exposed data, and Microsoft took swift action to secure it.”

    Diachenko is a well-known cybersecurity professional who collaborates with Comparitech. He praised Microsoft’s quick response to his findings.

    “I immediately reported this to Microsoft and within 24 hours all servers were secured. I applaud the MS support team for responsiveness and quick turnaround on this despite New Year’s Eve.”

    Microsoft’s own investigation continued, leading to the blog post today detailing what went wrong.

    “Our investigation has determined that a change made to the database’s network security group on December 5, 2019 contained misconfigured security rules that enabled exposure of the data.”

    The company said that the vast majority of data had already been cleared of any identifiable personal information, although there was some data meeting specific criteria that may not have been redacted.

    “As part of Microsoft’s standard operating procedures, data stored in the support case analytics database is redacted using automated tools to remove personal information. Our investigation confirmed that the vast majority of records were cleared of personal information in accordance with our standard practices. In some scenarios, the data may have remained unredacted if it met specific conditions.”

    Most importantly, the company says it has found no evidence of any malicious use of the exposed database.
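
    Exposed Elasticsearch servers like the five Diachenko found are typically located by probing the engine’s default REST port for an unauthenticated response. Below is a minimal sketch of such a check; the hostname is a hypothetical placeholder, this illustrates the general technique rather than Comparitech’s tooling, and you should only probe systems you are authorized to test:

        import requests

        def elasticsearch_exposed(host: str, timeout: float = 5.0) -> bool:
            """Return True if an unauthenticated Elasticsearch answers on port 9200."""
            try:
                resp = requests.get(f"http://{host}:9200/_cluster/health", timeout=timeout)
            except requests.RequestException:
                return False  # unreachable or refused: not (visibly) exposed
            # An open cluster answers 200 with JSON; a secured one returns 401/403.
            return resp.status_code == 200 and "cluster_name" in resp.json()

        print(elasticsearch_exposed("db.example.com"))  # hypothetical host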

  • Oracle Announces Additional Hiring to Boost Cloud Services

    Oracle is boosting its cloud efforts with an announcement that it is hiring some 2,000 new employees. While Amazon, Microsoft and, to some extent, Google have dominated the cloud market, Oracle sees ongoing opportunity to expand.

    Oracle has been making moves to take on the leaders, including opening offices in Microsoft’s back yard. Even more surprising, earlier this year Oracle announced a cloud partnership with Microsoft, working to ensure their products work seamlessly across each other’s cloud platforms.

    As Oracle continues its cloud expansion, the company is counting on the relative infancy of the market, along with the overall lack of penetration. In addition, Oracle is uniquely positioned to deliver the entire range of cloud services.

    “Cloud is still in its early days with less than 20 percent penetration today, and enterprises are just beginning to use cloud for mission-critical workloads,” said Don Johnson, executive vice president, Oracle Cloud Infrastructure. “Our aggressive hiring and growth plans are mapped to meet the needs of our customers, providing them reliability, high performance, and robust security as they continue to move to the cloud.”

    Oracle currently operates 16 cloud regions globally, with 12 of those being added in the past year. The company plans to add an additional 20 regions by the end of 2020, no doubt with the help of the 2,000 additional hires. By focusing on adding more regions, Oracle stands to gain strong footholds in regional and niche markets that the Big Three haven’t wrapped up.

    “Eleven countries or jurisdictions will have region pairs that facilitate enterprise-class, multi-region, disaster-recovery strategies to better support those customers who want to store their data in-country or in-region.

    “Today, Oracle is the only company delivering a complete and integrated set of cloud services and building intelligence into every layer of the cloud. Oracle Cloud Infrastructure’s growing talent base will ensure customers continue to benefit from best-in-class security, consistent high performance, simple predictable pricing, and the tools and expertise needed to bring enterprise workloads to cloud quickly and efficiently.

    “In addition to rapid hiring, Oracle will make additional real estate investments to support the expanded Oracle Cloud Infrastructure workforce.”

  • SAP CEO: It’s All About the Customer Experience

    SAP CEO Bill McDermott, in a wide-ranging interview with Bloomberg, talked about enterprises moving to the cloud, competing with Oracle’s new autonomous database, competing with Salesforce, and SAP’s huge business in China:

    SAP Has Taken Over the Enterprise Database Market

    Do you have a major move to the cloud? If legacy companies haven’t fully invested themselves in the cloud, where they’ve converted their revenue streams more to cloud than on-premise, I think you will see them make bold moves to get cloud-ready. No choice: that’s where the customer wants us.

    We obviously have taken over the enterprise database market with HANA. HANA has many of the characteristics that you mentioned (referencing Oracle). HANA can take data from any source in the enterprise, whether structured or unstructured. HANA is running the biggest enterprises in the world now, with 25,000 customers at mass scale. We like our HANA database very much.

    It’s All About the Customer Experience

    We see a fourth generation of CRM where we go beyond the current market participants. Basically, they focus on sales, marketing campaigns, things that essentially take money out of the customer’s pocket. What we want to do is focus on an omnichannel ecommerce world where we connect the demand chain, because our customers are social, mobile and on the run. They shop in every channel: direct to consumer, wholesale, retail. We want to connect that demand chain to the supply chain so that we have a complete end-to-end business.

    Why is this so important? We are not just talking about CRM, we are talking about customer experience. The way CEOs think about their brand, their products, their human capital, their customers. All of the people inside of the company have to be completely committed to the customers outside the company. This is what we call fourth-generation CRM. It’s all about the customer experience.

    We’d Like to See China and the US Cooperate

    The most important thing is that we get paid to run businesses and work in an environment where we let government do what government does. All government leaders have to do what’s best for their country and best for their constituents. These tariffs are obviously a serious situation. You have the two largest economies in the world, with $30 trillion in combined economic firepower, that right now are a little bit at odds with each other.

    As we saw in today’s tweet, President Xi and President Trump will sit down and talk at the G20. That’s very encouraging to the market. Markets like certainty. So certainly we would like to see China and the US cooperate. It’s good for the supply chain, it’s good for business.

    China is Regarded as SAP’s Second Home

    German engineering is highly regarded in China, as it is in the United States and around the world, but we do particularly well in China. China is our fastest-growing market. We think that China is easily regarded as SAP’s second home in terms of market receptivity, ecosystem growth, and our long-term prospects. We think China will end up being the biggest market in the world soon.

    We have the most sophisticated data privacy in the world. We acquired a company called Gigya, where we have billions and billions of customer records. We protect your privacy; we don’t let customers actually engage you unless you agree that you want to opt in on various offerings from our customers as they serve their customers. We follow the same reference architecture, the same high-security standards and cloud standards in China that we do in Europe, the United States, and every other theater in the world.

    We are very confident in the way enterprises can serve their customers in China with high-security standards. We recently announced a very important partnership with Alibaba, a cloud partnership that will impact our growth in one of the fastest-growing regions in the world.

    We Are Very Diverse and Highly Inclusive

    We actually have appointed in the last 12 months two women to our Executive Board, not just because they are women, but because they are great leaders. That would be Adaire Fox-Martin and Jennifer Morgan. If you look at our company we have a third of our workforce that is female and we also have a third of our leaders that are female.

    We are very diverse and highly inclusive. One of the things we really enjoy is what we have done with Autism at Work, and now we have dedicated one percent of our hiring to people on the autism spectrum, to help our workforce be highly productive and diverse. That extends also to the solutions that we have. If you look at SuccessFactors, the number one human capital solution in the world, we have a business-without-bias mentality.

    Computers don’t have bias. The way we build the algorithms in the software, they eliminate bias from the hiring process. The computer doesn’t have a bias. It looks for the best candidates and fits the model that the company is trying to get at. If you want 40 percent of your workforce to be diverse and inclusive, the model is built to do that for you. You don’t leave it up to humans; you let the software do the work, and then the human judgment comes in at the final phase of hiring. It’s changing companies everywhere.

  • Google Creates a Technical Guide for Moving to the Cloud

    Google has created a guide, in the form of a website called Google Cloud Platform for Data Center Professionals, for companies that are considering a move to its cloud.

    “We recognize that a migration of any size can be a challenging project, so today we’re happy to announce the first part of a new resource to help our customers as they migrate,” said Peter-Mark Verwoerd, a Solutions Architect at Google who previously worked for Amazon Web Services. “This is a guide for customers who are looking to move to Google Cloud Platform (GCP) and are coming from non-cloud environments.”

    The guide focuses on the basics of running IT — Compute, Networking, Storage, and Management. “We’ve tried to write this from the point of view of someone with minimal cloud experience, so we hope you find this guide a useful starting point,” said Verwoerd.

  • Amazon Kinesis Analytics Announced for AWS Developers

    Amazon Web Services (AWS) is making available Amazon Kinesis Analytics, enabling continuous querying of streaming data using standard SQL. This allows developers to create SQL queries on live and continuous data for real-time analysis. No new programming skills are needed.

    “AWS’s functionality across big data stores, data warehousing, distributed analytics, real-time streaming, machine learning, and business intelligence allows our customers to readily extract and deploy insights from the significant amount of data they’re storing in AWS,” said Roger Barga, General Manager, Amazon Kinesis. “With the addition of Amazon Kinesis Analytics, we’ve expanded what’s already the broadest portfolio of analytics services available and made it easy to use SQL to do analytics on real-time streaming data so that customers can deliver actionable insights to their business faster than ever before.”

    Amazon Kinesis Analytics processes streaming data with sub-second processing latencies, enabling you to analyze and respond in real time. According to Amazon, it provides built-in functions optimized for stream processing, like anomaly detection and top-K analysis, so that you can easily perform advanced analytics.

    “You can now run continuous SQL queries against your streaming data, filtering, transforming, and summarizing the data as it arrives,” said AWS Chief Evangelist Jeff Barr. “You can focus on processing the data and extracting business value from it instead of wasting your time on infrastructure. You can build a powerful, end-to-end stream processing pipeline in 5 minutes without having to write anything more complex than a SQL query.”

    “When I think of running a series of SQL queries against a database table, I generally think of the data as staying more or less static while the queries come and go pretty quickly,” Barr explained. “Rows are added, changed, and deleted all the time, but this does not generally matter when considering a single query that runs at a particular point in time. Running a Kinesis Analytics query against streaming data turns this model sideways. The queries are long-running and the data changes many times per second as new records, observations, or log entries arrive. Once you wrap your head around this, you will see that the query processing model is very easy to understand: You build persistent queries that process records as they arrive.”

    AWS customers can employ Amazon Kinesis Analytics in minutes by going to the AWS Management Console and selecting a Kinesis Streams or Kinesis Firehose data stream. Amazon says that Kinesis Analytics takes care of everything required to continuously query streaming data, automatically scaling to match the volume and throughput rate of incoming data while delivering sub-second processing latencies.
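
    For those who script rather than click, the same application can be created through the API. A minimal sketch using boto3’s kinesisanalytics client; the application name and the windowed query are our own illustrative assumptions, and the input/output wiring (source stream ARN, IAM role, schema) is omitted here:

        import boto3

        # A continuous query in the Kinesis Analytics SQL dialect: count events
        # per page over one-minute tumbling windows. Stream names are hypothetical.
        APPLICATION_CODE = """
        CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (page VARCHAR(256), views INTEGER);

        CREATE OR REPLACE PUMP "STREAM_PUMP" AS
          INSERT INTO "DESTINATION_SQL_STREAM"
          SELECT STREAM "page", COUNT(*)
          FROM "SOURCE_SQL_STREAM_001"
          GROUP BY "page", FLOOR("SOURCE_SQL_STREAM_001".ROWTIME TO MINUTE);
        """

        client = boto3.client("kinesisanalytics")
        client.create_application(
            ApplicationName="pageview-counter",  # hypothetical name
            ApplicationDescription="Tumbling-window pageview counts",
            ApplicationCode=APPLICATION_CODE,
        )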

  • AWS Database Migration Service Available to All

    Amazon announced that over 1,000 databases have migrated to Amazon Web Services since January 1, and that the AWS Database Migration Service is now available to all customers.

    AWS Database Migration Service is a managed service that lets customers migrate Oracle, SQL Server, MySQL, MariaDB, and PostgreSQL databases from on-premises datacenters to AWS with “virtually no” downtime.

    Of the 1,000+ databases migrated, Amazon says many also used the AWS Schema Conversion Tool to switch database engines.

    “Customers migrating their databases to the cloud have faced a difficult choice: either take their database out of service while they copy the data (losing revenue and traffic in the process), or purchase migration tools that typically cost hundreds of thousands of dollars,” Amazon said. “The AWS Database Migration Service solves this problem by reducing the complexity, cost, and downtime of database migration, making it possible for customers to migrate terabyte-sized on-premises Oracle, SQL Server, and open source databases to the Amazon Relational Database Service (Amazon RDS) or to a database running on Amazon Elastic Compute Cloud (Amazon EC2) for as little as $3/TB and with virtually no downtime.”

    “Hundreds of customers moved more than a thousand of their on-premises databases to Amazon Aurora, other Amazon RDS engines, or databases running on Amazon EC2 during the preview of the AWS Database Migration Service,” said Hal Berenson, Vice President, Relational Database Services for AWS. “Customers repeatedly told us they wanted help moving their on-premises databases to AWS, and also moving to more open database engine options, but the response to the AWS Database Migration Service has been even stronger than we expected. In the preview, one-third of the database migrations used the AWS Database Migration Service to not only move databases to the AWS Cloud, but also to switch database engines in the process.”

    With the migration service, customers pay an hourly fee, and according to the company, set-up only takes ten minutes.

    The service can be accessed from the AWS Management Console. It’s available in the US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions now. In the coming months, it will be available in additional regions.
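
    Teams that script their migrations can also drive the service with boto3. A minimal sketch of starting a task, assuming the replication instance and endpoints already exist; all ARNs and identifiers below are hypothetical placeholders:

        import json
        import boto3

        dms = boto3.client("dms")

        # Select every table in every schema for migration.
        table_mappings = {
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }

        dms.create_replication_task(
            ReplicationTaskIdentifier="oracle-to-rds-migration",
            SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
            TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
            ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
            MigrationType="full-load-and-cdc",  # full copy, then ongoing replication
            TableMappings=json.dumps(table_mappings),
        )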

    Image via Amazon

  • Microsoft Extends SQL Server to Linux

    Microsoft announced that it will bring SQL Server to Linux, enabling a consistent data platform across Windows Server and Linux.

    For now, the core relational database capabilities are in preview (starting immediately) with availability coming mid-2017.

    “SQL Server on Linux will provide customers with even more flexibility in their data solution,” said Scott Guthrie, Executive Vice President, Cloud and Enterprise Group at Microsoft. “One with mission-critical performance, industry-leading TCO, best-in-class security, and hybrid cloud innovations – like Stretch Database which lets customers access their data on-premises and in the cloud whenever they want at low cost – all built in.”

    “SQL Server’s proven enterprise experience and capabilities offer a valuable asset to enterprise Linux customers around the world,” said Paul Cormier, President, Products and Technologies at Red Hat. “We believe our customers will welcome this news and are happy to see Microsoft further increasing its investment in Linux. As we build upon our deep hybrid cloud partnership, spanning not only Linux, but also middleware, and PaaS, we’re excited to now extend that collaboration to SQL Server on Red Hat Enterprise Linux, bringing enterprise customers increased database choice.”

    Microsoft also announced some other improvements to SQL Server including security encryption capabilities, in-memory database support and performance increases of up to 30-100x, improved data warehousing, new BI support for iOS, Android, and Windows Phone, and advanced analytics.

    Guthrie discusses all of this and the coming Linux support here.

    General availability of SQL Server 2016 will come later this year.

    Image via Microsoft

  • Facebook Is Shutting Down Parse

    Back in 2013, Facebook acquired Parse, a cloud-based platform for cross-platform apps, which let developers create rich social apps integrated with Facebook across platforms like iOS, Android, HTML5, etc.

    The company announced late on Thursday that it is shutting Parse down. They’re beginning the wind-down process and expect to have it fully retired on January 28, 2017.

    “We’re proud that we’ve been able to help so many of you build great mobile apps, but we need to focus our resources elsewhere,” says Parse co-founder Kevin Lacker. “We understand that this won’t be an easy transition, and we’re working hard to make this process as easy as possible. We are committed to maintaining the backend service during the sunset period, and are providing several tools to help migrate applications to other services.”

    They released a database migration tool to let developers migrate data from their Parse apps to any MongoDB database.

    “During this migration, the Parse API will continue to operate as usual based on your new database, so this can happen without downtime,” says Lacker. “Second, we’re releasing the open source Parse Server, which lets you run most of the Parse API from your own Node.js server. Once you have your data in your own database, Parse Server lets you keep your application running without major changes in the client-side code.”
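
    Once migrated, the data is ordinary MongoDB and can be queried directly. A minimal sketch with pymongo; the connection URI, database, and collection names are hypothetical, and real Parse exports carry additional internal fields:

        from pymongo import MongoClient

        # After migration, the app's data lives in a MongoDB you control.
        client = MongoClient("mongodb://db.example.com:27017")
        db = client["my_parse_app"]

        # Parse classes become ordinary collections after migration.
        doc = db["GameScore"].find_one({"playerName": "Sean Plott"})
        print(doc)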

    You can find a migration guide here.

    Image via Parse

  • ScaleArc Announces ACID-Compliant Caching Mechanism For MySQL, SQL Server, Oracle

    Database load balancing software provider ScaleArc announced support for automatic cache invalidation, which it calls the world’s first ACID-compliant caching mechanism for dynamic data.

    According to the company, the feature increases website and app performance by making it safe to cache data that frequently changes. This goes for things like shopping cart and user profile information for ecommerce apps.

    The software tracks data changes to invalidate a cache entry so outdated data isn’t served. The company says it also provides the ability to handle more workload, reducing page download speeds and increasing site performance while protecting data.
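
    ScaleArc’s implementation is proprietary, but the core idea of write-triggered invalidation is easy to sketch. The toy below illustrates the general pattern under our own assumptions, not ScaleArc’s design:

        # Reads are served from cache when possible; any write that touches a key
        # evicts its cached entry so stale data is never served.
        cache: dict[str, object] = {}

        def cached_read(key: str, query_db) -> object:
            if key not in cache:            # miss: fall through to the database
                cache[key] = query_db(key)
            return cache[key]

        def invalidating_write(key: str, value: object, update_db) -> None:
            update_db(key, value)           # commit to the database first (keeps ACID)
            cache.pop(key, None)            # then drop the entry; the next read refetches

        # Usage, with a dict standing in for the database:
        db = {"cart:42": ["book"]}
        print(cached_read("cart:42", db.get))                    # loads and caches
        invalidating_write("cart:42", ["book", "pen"], db.__setitem__)
        print(cached_read("cart:42", db.get))                    # fresh value, not stale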

    “ScaleArc’s auto cache invalidation capability was recently put to the test through an extensive evaluation program conducted by a leading eCommerce company,” said ScaleArc. “The company tested the feature across two query sizes, measuring the query-per-second (QPS) rate and response time both with and without ScaleArc’s database load balancing software. Throughout the testing, the company observed that response time with ScaleArc’s software improved 6x to 12x, depending on the query response size.”

    “For any company conducting business online, poor website or application performance can result in users failing to complete a transaction or abandoning the eCommerce site all together,” added CEO Justin Barney. “With ScaleArc’s database load balancing software and automatic cache invalidation, companies can now cache data that was previously believed to be too risky to cache. By now making this data safe to cache, ScaleArc can bolster business for companies with dynamic data, by reducing their page download times and increasing their overall site availability.”

    In addition to shopping cart data and user profile data, ScaleArc says financial data is a predominant use case for auto cache invalidation.

    The load balancing software is available for SQL Server, MySQL, and Oracle. More here.

    Image via ScaleArc

  • Facebook, Google, Twitter & LinkedIn Team Up On WebScaleSQL

    Facebook, Google, Twitter and LinkedIn announced a new collaboration among their engineering teams called WebScaleSQL.

    A spokesperson for Facebook tells us that the companies are “working to share a common set of changes to the upstream MySQL branch that will be available via open source,” and “will include contributions from MySQL engineering teams at all four companies.”

    “WebScaleSQL will expand on existing efforts by the MySQL community, and we will continue to track the upstream branch that is the latest, production-ready release (currently MySQL 5.6),” Facebook says.

    So far, the engineers have set up a system for collaborating, reviewing code, and reporting bugs. One engineer can propose a change, and an engineer from another company will review the code and offer feedback. If an agreement is reached, the change is pushed to the WebScaleSQL branch for everybody else. Each company can then further customize WebScaleSQL for its own needs.

    The engineers have already built an automated framework that runs and publishes the results of MySQL’s built-in test system, a suite of stress tests, and a prototype automated performance-testing system. They’ve also made changes to code structure and existing tests, along with performance improvements and features that make scaling WebScaleSQL easier.

    You can read more about the specifics here.

    The companies intend to keep their WebScaleSQL work open and to continue to follow the most up-to-date upstream version of MySQL.

    “As long as the MySQL community releases continue, we are committed to remaining a branch – and not a fork – of MySQL,” says Facebook software engineer Steaphan Greene.

    Those who want to get involved with the project can check out this site.

    Image via WebScaleSQL.org

  • IBM Acquires Cloud Database Company Cloudant

    IBM this week announced that it has officially completed its acquisition of Cloudant, a formerly private cloud database company based in Boston. The purchase price and other details of the transaction have not been disclosed.

    Cloudant provides database-as-a-service to developers and enterprises that need scalable database solutions implemented quickly. The company currently provides its services to clients that include mobile app developers, retail businesses, and companies in the financial services industry.

    Cloudant is also a contributor to the Apache CouchDB open source community. The company provides a JSON-based cloud service for web and mobile developers.
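
    Because Cloudant speaks the CouchDB-compatible HTTP API, storing a JSON document is a single authenticated PUT. A minimal sketch; the account, credentials, and database name are hypothetical placeholders:

        import requests

        BASE = "https://myaccount.cloudant.com"
        AUTH = ("myaccount", "secret")

        # Create the database (a 412 response just means it already exists).
        requests.put(f"{BASE}/profiles", auth=AUTH)

        # PUT a document with a chosen _id; CouchDB answers 201 with a revision token.
        resp = requests.put(
            f"{BASE}/profiles/user:1001",
            auth=AUTH,
            json={"name": "Ada", "plan": "free"},
        )
        print(resp.status_code, resp.json())  # e.g. 201 {'ok': True, 'id': ..., 'rev': ...}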

    “Our decision to join IBM marks a clear shift in the way modern software is built,” said Derek Schoettle, CEO of Cloudant. “A new generation of developers has grown up coding against web frameworks and cloud infrastructure. When Cloudant launched in 2010, we knew this next wave of innovation would be a core market for our service. Now in 2014, we’re seeing web development transition to the enterprise, and, as part of IBM, we couldn’t be in a better spot.”

    IBM will be folding Cloudant into its Information and Analytics Group. The company will integrate Cloudant technology into its big data and analytics, mobile, and cloud computing initiatives. Cloudant tech is already being used in IBM’s MobileFirst solutions and will soon be available through the company’s Bluemix platform.

    “With the acquisition of Cloudant, IBM is helping to fuel a new era of next generation mobile and web apps built on the cloud,” said Sean Poulley, VP for Databases & Data Warehousing at IBM. “Boosting IBM’s big data and analytics, cloud computing and mobile offerings, Cloudant’s open, cloud database service will bring entirely new levels of simplicity and scalability to developers.”

    Image via Cloudant

  • Google Launches General Availability Of Cloud SQL

    Google released Cloud SQL in limited preview back in 2011, and has only just now launched general availability.

    It was first conceived of as an add-on to Google App Engine, but Google has since launched Compute Engine, and it works as a “database backbone” for apps running on either.

    With general availability, Cloud SQL gets encryption of customer data, a 99.95% uptime SLA, and support for databases up to 500GB in size.

    Cloud SQL instances can store up to that amount, with pricing that ranges from the smallest D0 instance at $0.025 per hour to the D32 instance, with 16GB of RAM, at $46.84 per day.

    “Your data is replicated multiple times in multiple zones and automatically backed up, all included in the price of the service,” says Google Cloud product manager Joe Faith. “And you only pay for the storage that you actually use, so you don’t need to reserve this storage in advance.”

    “Replicated storage means we can guarantee 99.95% availability of the service,” he adds. “And because even a reduced service is not acceptable for many applications, we have set a high bar for availability: for example, we regard a single minute of just 20% connection failure as a downtime.”

    Google says Cloud SQL has already seen “great” developer traction with customers including Costco, LiveHive, Ocado and LiveStream.

    Last year, Google launched the Cloud SQL API.

    Image via Google

  • Get Ready For Lots Of Microsoft Enterprise Releases This Fall

    Microsoft has announced a bunch of new enterprise cloud solutions coming this fall.

    On October 18th, the company will release Windows Server 2012 R2 and System Center 2012 R2 for businesses to create data centers, as well as Visual Studio 2013 and the new .NET 4.5.1 for app creation.

    On November 1st, the company will start offering Enterprise Agreement customers access to discounted Windows Azure prices.

    The company announced a strategic partnership with Equinix to provide customers with more cloud connection options. This follows similar previously announced partnerships with AT&T and others. Customers will be able to connect their networks with Windows Azure at Equinix exchange locations.

    Microsoft introduced Windows Azure US Government Cloud for government customers, and Windows Azure has been granted “FedRAMP Joint Authorization Board Provisional Authority to Operate,” which the company says makes it the first public cloud of its kind to achieve this level of government authorization.

    Next week, the company will release a second preview of SQL Server 2014 with increased performance improvements. Later this month, Microsoft will release Windows Azure HDInsight Service, an Apache Hadoop-based service that works with SQL Server for big data analytics.

    On October 18th, Microsoft will release Windows Intune, aimed at helping IT departments give mobile employees secure access to apps and data. Later this month, they’re also launching a Microsoft Remote Desktop app for Windows Server 2012 R2.

    Also coming later this month is Microsoft Dynamics CRM Online Fall ’13. Microsoft Dynamics NAV 2013 R2 is now available.

    “As enterprises move to the cloud they are going to bet on vendors that have best-in-class software as a service applications, operate a global public cloud that supports a broad ecosystem of third party services, and deliver multi-cloud mobility through true hybrid solutions,” said Satya Nadella, Microsoft’s Cloud and Enterprise executive vice president. “If you look across the vendor landscape, you can see that only Microsoft is truly delivering in all of those areas.”

    More details about all these releases here.

    Image: Microsoft

  • DataStax Raises New $45 Million Round Of Funding

    DataStax announced this week that it has completed a $45 million series D round of funding led by Scale Venture Partners with participation from existing investors Lightspeed Venture Partners, Crosslink Capital and Meritech Capital Partners, and new investors DFJ Growth and Next World Capital.

    The company says the funding builds on its ongoing customer momentum, which includes 20 companies in the Fortune 100, as well as “dozens” of enterprises migrating from Oracle to Cassandra-based NoSQL.

    DataStax says it will use the money to further invest in international expansion, channel growth and product development.

    “The evolution of enterprise applications and rise of big data has eclipsed traditional database capabilities and provides an opening for a significant new market entrant,” said Andy Vitus, partner, Scale Venture Partners. “DataStax is poised to disrupt the traditional RDBMS market and has already demonstrated significant momentum – signing an enviable list of enterprise customers, expanding into Europe, and unveiling innovative releases that make the product easier to adopt, deploy, and manage. We look forward to working with the team to further accelerate their expansion as they address this large and growing market.”

    “Our Cassandra-based platform is far and away the best solution for powering online applications that must remain available at all times and scale to tremendous levels,” said Billy Bosworth, CEO, DataStax. “Today’s funding exceeds all the capital we’ve received to date, and we will use this investment to accelerate our international expansion, channel growth, sales and marketing and product development while increasing our support for the open-source Cassandra community.”

    The company’s customers currently include Netflix, eBay, Adobe, Constant Contact, and Ooyala.

    DataStax also unveiled new enterprise software this week: DataStax Enterprise (DSE) 3.1. More on that here.

  • Teradata Announces Teradata Intelligent Memory

    Teradata has introduced a new database technology called Teradata Intelligent Memory, which the company says “creates the industry’s first extended memory space beyond cache that significantly increases query performance and enables organizations to leverage in-memory technologies with big, diverse data.”

    In other words, Teradata says, it’s the first in-memory technology that supports big data deployments.

    The product is part of the “Unified Data Architecture” strategy, which leverages Teradata, Teradata Aster, and open source Apache Hadoop. As the company notes, frequently used data in Apache Hadoop can be accessed through Teradata SQL-H and, based on the temperature of the data, moved to Intelligent Memory to take advantage of its computing capability.

    “The introduction of Teradata Intelligent Memory allows our customers to exploit the performance of memory within Teradata Platforms in a new way, and extends our leadership position as the best performing data warehouse technology at the most competitive price,” said Scott Gnau, president, Teradata Labs. “Teradata Intelligent Memory technology is built into the data warehouse and customers don’t have to buy a separate appliance. Additionally, Teradata enables its customers to configure the exact amount of in-memory capability needed for critical workloads. It can be difficult, expensive, and impractical to keep all data in memory, and Teradata’s unique approach means the right amount of memory can be applied to the right set of data for blinding performance – automatically.”

    “Teradata’s new in-memory architecture is integrated with its management of data temperature,” said Richard Winter, chief executive officer, WinterCorp. “This is very significant, because the hottest data will migrate automatically to the in-memory layer – Teradata Intelligent Memory; the next hottest data will move automatically to solid state disk; and so on. Teradata also provides the column storage and data compression that amplify the value of data in memory. The customer sees increased performance without having to make decisions about which data is placed in memory.”
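
    Teradata’s placement policy is proprietary, but the idea of data temperature can be sketched in a few lines: count accesses and promote the most-touched data to the fastest tier. A toy illustration under our own assumptions, not Teradata’s algorithm:

        from collections import Counter

        # Count accesses, then place the hottest keys in memory, the next-hottest
        # on SSD, and the rest on disk. Concept sketch only.
        access_counts: Counter = Counter()

        def record_access(key: str) -> None:
            access_counts[key] += 1

        def assign_tiers(memory_slots: int, ssd_slots: int) -> dict:
            ranked = [k for k, _ in access_counts.most_common()]
            tiers = {}
            for i, key in enumerate(ranked):
                if i < memory_slots:
                    tiers[key] = "memory"
                elif i < memory_slots + ssd_slots:
                    tiers[key] = "ssd"
                else:
                    tiers[key] = "disk"
            return tiers

        # Simulate a skewed workload, then look at placement.
        for key in ["orders"] * 50 + ["customers"] * 10 + ["archive"]:
            record_access(key)
        print(assign_tiers(memory_slots=1, ssd_slots=1))
        # {'orders': 'memory', 'customers': 'ssd', 'archive': 'disk'}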

    Teradata Intelligent Memory is available for current Teradata workload-specific platforms running the Teradata database. It’s part of Teradata Database 14.10, which will be released in the second quarter.

  • Airbnb Forgoes SQL In Favor Of Memcached

    Airbnb recently shared an engineering Q&A about building its Airbnb Neighborhoods feature. Here’s a look at the feature if you’re unfamiliar:

    Introducing Airbnb Neighborhoods from Airbnb on Vimeo.

    Basically, it shows users pages about neighborhoods in cities, so they can decide where they want to stay when they visit.

    You can read the full Q&A here, but here’s the part where they explain why they ultimately went with Memcached over a SQL database or DynamoDB:

    At first, it seemed we wanted a SQL database, as our data had relations. However, this was ruled out based on the need for mass updates. Next, we looked at an in-house NoSQL solution that we call Dyson. Dyson seemed to give us the flexibility we needed with writes and updates, so we tried it. For reference, Dyson is backed by Amazon’s DynamoDB, a reliable, but limited, managed, NoSQL solution. In essence, if we put the data right into DynamoDB, then Dyson can serve it. This led to the creation of a DynamoDB cascading tap. Countless timeouts, headaches and late nights later, we had a working solution.

    However, there was a problem, namely DynamoDB’s 65KB storage limit. When you’re storing uncompressed JSON, that’s a pretty easy target to reach. As a band-aid, we engineered a solution involving pages of tuples. To say this solution was sub-optimal is putting it mildly, and the performance was even worse.

    With launch quickly approaching, brilliant words saved the day: “You don’t need a database, you need a [expletive deleted] cache” 1. So that’s what we did, we traded our database for a cache. Specifically, we switched from Dyson to Memcached.
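
    The shape of that trade is simple: a batch job precomputes each neighborhood page and writes it wholesale into the cache, and the site serves reads straight from Memcached. A minimal sketch with pymemcache; the keys and payloads are our own assumptions, not Airbnb’s schema:

        import json
        from pymemcache.client.base import Client

        cache = Client(("localhost", 11211))

        def publish_neighborhood(slug: str, payload: dict) -> None:
            # A batch job recomputes pages and overwrites them wholesale -- the
            # "mass update" pattern that made a relational store awkward here.
            cache.set(f"neighborhood:{slug}", json.dumps(payload).encode())

        def get_neighborhood(slug: str):
            raw = cache.get(f"neighborhood:{slug}")
            return json.loads(raw) if raw else None

        publish_neighborhood("mission-district", {"city": "San Francisco", "listings": 412})
        print(get_neighborhood("mission-district"))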

    Here’s a look at the Neighborhoods admin tool:

    neighborhoods-admin-tool from AirbnbNerds on Vimeo.

    [via GigaOm]

  • Rackspace Acquires MongoDB DBaaS Provider ObjectRocket

    Rackspace announced today that it has entered an agreement to acquire ObjectRocket, a MongoDB DBaaS provider.

    A Rackspace spokesperson tells WebProNews, “Every web application requires a database to store customer behavior and other critical data. With this open source-based MongoDB solution, Rackspace will further its open cloud mission and broaden its product portfolio to offer customers a NoSQL database as a service and the ability to handle big data analytics. With this acquisition Rackspace will provide its open cloud customers with a fast and scalable MongoDB offering.”

    Rackspace says that the acquisition will help it establish a strong presence within the NoSQL database market. The company points to a recent report from the 451 Group indicating that NoSQL software revenue is expected to reach $215 million by 2015.

    Rackspace SVP of Corporate Development, Pat Matthews, said, “Databases are the core of any application and expertise in the most popular database technologies will be critical to us delivering Fanatical Support in the open cloud. As we look to expand our open cloud database offering into the MongoDB world, we are really excited to work with the entrepreneurs and engineers at ObjectRocket.”

    ObjectRocket CEO Chris Lalonde added, “ObjectRocket is thrilled to join the Rackspace family. With Rackspace’s open cloud philosophy and our shared emphasis on providing the highest level of customer support, we feel this union is an ideal fit. Since the beginning, our focus has been on creating a DBaaS platform that would perform, scale and support critical workloads in a superior manner. Joining forces with Rackspace will enable us to achieve this goal, while delivering one of the most advanced MongoDB DBaaS solutions on the market.”

    Rackspace’s ObjectRocket offering will be available in early March for Rackspace customers in its Chicago facility. It will soon be deeply integrated across Rackspace’s Open Cloud portfolio.

    It will also continue to be sold as a standalone service.

    The acquisition will close today. Terms were not disclosed.

  • Idera Launches Three New SQL Server Tools

    Idera announced the availability of three new free tools for SQL Server DBAs and IT administrators: Server Backup Free, SQL Backup Status Reporter and SQL Permissions Extractor Free.

    Idera CEO Rick Pleczko says, “Idera has made a commitment to providing free tools and solutions that help both IT and database administrators better manage their servers. It’s our way of paying back the success that the community has helped us achieve.”

    “Idera’s Server Backup Free delivers a free copy of their leading high-performance server backup product,” the company says in an announcement. “Idera’s SQL backup status reporter and SQL permissions extractor allow DBAs to quickly and easily ensure that databases have been backed up and copy permissions across servers. All of these tools are designed to help administrators better manage the growing number of servers in the enterprise.”

    Server Backup Free has all of the features of the Enterprise version, except it’s for a single server. Features include the ability to back up physical and virtual servers in just minutes (as opposed to hours), the ability to back up to any disk-based storage (second hard disk, NAS, SAN, etc.), the ability to restore files in seconds with Disk Safe technology, and “easy” installation.

    Backup Status Reporter lets DBAs view a graphical representation of backups across their SQL Server environment. Features include: the ability to identify databases that haven’t had backups, the ability to view backup history (including backup date and type), a simplified grid view for easy sorting and navigation, and the ability to identify full and differential backups for one or many databases.

    SQL Permissions Extractor lets DBAs copy and reassign permissions from one server to another without having to write any script. It can generate T-SQL scripts for copying of user permissions to other servers, and enables the editing, saving and execution of permissions scripts. It also lets users include object level permissions for selected databases.
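
    The underlying trick, reading SQL Server’s catalog views and emitting equivalent GRANT statements, fits in a short script. A hedged sketch with pyodbc; the connection string is a hypothetical placeholder, and a real tool such as Idera’s covers object-level permissions, roles, and edge cases this ignores:

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db.example.com;"
            "DATABASE=Sales;UID=audit;PWD=secret"
        )

        # Read database-level permissions from the catalog views.
        SQL = """
        SELECT pr.name, pe.permission_name, pe.state_desc
        FROM sys.database_permissions AS pe
        JOIN sys.database_principals AS pr
          ON pe.grantee_principal_id = pr.principal_id
        WHERE pe.class = 0  -- database-level permissions only
        """

        # Emit statements that can be replayed on another server.
        for name, permission, state in conn.cursor().execute(SQL):
            verb = "GRANT" if state.startswith("GRANT") else "DENY"
            print(f"{verb} {permission} TO [{name}];")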

  • NuoDB Officially Launches Its Cloud-Based Database

    Developers need scalable databases more than ever for their apps. Of course, this presents a problem for those apps that become really popular and need more room to grow. It might prove too costly for some developers to traditionally scale their database, but a new startup is challenging that notion with a cloud-based database that easily scales to developers’ needs.

    NuoDB announced that its new cloud-based database service has officially launched. The new system operates on what the company calls the “12 Rules for a Cloud Data Management System”:

  • Modern Superset of an RDBMS
  • Elastic Scale-out for Extreme Performance
  • Single Logical Database
  • Run Anywhere, Scale Anywhere
  • Nonstop Availability
  • Dynamic Multi-tenancy
  • Active/Active Geo-distribution
  • Embrace Cloud
  • Store Anywhere, Store Redundantly
  • Workload Mix
  • Tunable Durability Guarantees
  • Distributed Security
  • Empower Developers & Administrators
    With these rules in hand, NuoDB set out to change the way developers use databases:

    “After a comprehensive trial and intense listening to our 3,500 beta customers, NuoDB is ready for market,” stated NuoDB CEO Barry Morris. “Our Cloud Data Management System is a game changer. We are challenging the industry to offer comparable elastic scaling, over 1 million transactions per second of performance on just $50,000 worth of commodity hardware, as published in our latest benchmark report, as well as the many other cloud-friendly features found in our NuoDB Starlings release.”

    For the more visual-oriented among us, here’s a helpful video that explains what makes NuoDB different from the rest:

    NuoDB in 90 seconds from NuoDB on Vimeo.

    NuoDB is already in use by a number of large clients building out expansive, scalable databases. One in particular, NorthPoint, has nothing but praise for NuoDB’s service:

    “NuoDB is launching a revolution in the database world by leveraging commodity storage. For the first time, an application’s storage requirements can be satisfied using a pool of shared resources, operating in a fully-elastic way, as the nature of cloud computing requires,” said Richard Cooley, Managing Partner, NorthPoint. “The database is no longer a monolithic, resource-intensive burden with limited scalability. While still maintaining the documented benefits of ACID and SQL, NuoDB is about as disruptive as it gets.”

    Developers have three options to choose from when it comes to deploying NuoDB. The first is a free option that offers 2 hosts, 4GB of storage, unlimited databases and limited deployment. The second is the Pro version that supports 2 or more hosts, 16GB to multi-petabytes, full deployment and starts at $1,200 a year. Finally, there’s the developer version that offers all the benefits of Pro, but lacks deployment options. It’s free, however, so developers can just go crazy with it.

    [h/t: Computerworld]