Altabel Group's Blog

Archive for the ‘Cloud’ Category

It’s impossible to deny the amazing rise of Chrome OS. This Linux-based platform was the ideal solution at the ideal time. The cloud proved itself not only a viable option but, in many cases, the best option. The puzzle was simple to solve:

Create a cost-effective platform that blended seamlessly with the cloud.

Linux? Are you listening? Now is your chance. All of the pieces are there; you just have to grab the golden ring before Microsoft does.

One of the main reasons why Chrome OS has succeeded is Google. Google not only has the cash to spend on the development of such a product, it also has the momentum of its brand behind it (and the “Google” brand, no less). Even without that backing, Linux could follow in Google’s footsteps and create its own cloud-based OS.

But why?

The answer to that is also simple: Because Linux needs (in one form or another) a major win in the desktop arena. It now has the street cred (thanks to Android and Chrome OS — both of which are built on the Linux kernel), so all it needs is to deliver something… anything… to build on that momentum. I think that thing could be a cloud-based platform. These platforms have already proven their worth, and people are buying them up. Since cheap (read “free”) has long been one of Linux’s calling cards, it’s a perfect fit.

I’ve installed Linux on a Chromebook (Bodhi Linux on an Acer C720). The marriage of a full-blown Linux distribution and the Chromebook was fantastic. You could hop onto your Google account and work magic — or, to one-up Chrome OS, you could work with the many local apps. That’s where a cloud-based Linux device could help solidify both the cloud ecosystem and the Linux platform… the best of both worlds.

To this end, three things need to happen:

  • Canonical needs to re-focus on the desktop (or in this case, a cloud-based iteration)
  • A hardware vendor needs to step up and take a chance on this platform idea
  • Open Xchange needs to work with the distribution to create a seamless experience between the platform and the cloud system

It’s a lot to ask, especially on Canonical’s end (with so much of its effort focused on the Ubuntu Phone and Mir). But with its goal of convergence, getting Ubuntu Linux cloudbook-ready shouldn’t be a problem. As for Open Xchange, I would imagine it welcoming this opportunity. At the moment, the OX App Suite is a quality product living its life in obscurity. A Linux-based “cloudbook” (please do not call it a Linbook) could change that. The hardware side of things is simple, because it’s already been proven that Linux will run on nearly every available Chromebook (and it should, since Chrome OS uses the Linux kernel).

I say all of this as an avid Chromebook user. I find the minimal platform a refreshing change: it’s incredibly easy to use and helps me get my work done efficiently, with minimal distraction. There are times, however, when I would love to have a few local apps (like the GIMP, for example). With a Linux cloudbook, this would not only be possible, it would be easy. In fact, you would find plenty of apps that could be installed and run locally (without sucking up too much local storage space).

The cloudbook could very well be the thing that vaults Linux into the hands of the average user, without having to stake its claim through Chrome OS or Android. And with the Linux cloudbook in users’ hands, the door for the Ubuntu Phone will have been opened, ready to walk through. Convergence made possible and easy.

The desktop, the cloudbook, the phone.

Is the cloudbook a path that Linux should follow — or would the overwhelming shadow of Google keep it neatly tucked away from the average consumer and success? Let us know your thoughts in the discussion thread below.

Taken from TechRepublic


Romanas Lauks
Romanas.Lauks@altabel.com
Skype ID: romanas_lauks
Marketing Manager (LI page)
Altabel Group – Professional Software Development

In the last year, Google has stampeded toward the enterprise. With advancements in Chromebooks and Chromeboxes, improved security, and incentive pricing, it’s obvious that Google is working hard to build out its portfolio of enterprise customers.

Another product that Google has been making more accessible to its business customers is its Cloud Platform. While Google has added value with new features, it is still uncertain whether or not it will be able to compete in a market dominated by Amazon AWS and Microsoft Azure.

The Google Cloud Platform is Google’s infrastructure-as-a-service offering, where users can host and build scalable web applications. The Cloud Platform is technically a group of tools that runs the gamut of what most people need to build a business online. Currently, these are the tools that make up the Cloud Platform:

  • Google App Engine
  • Google Compute Engine
  • Google Cloud Storage
  • Google Cloud Datastore
  • Google Cloud SQL
  • Google BigQuery
  • Google Cloud Endpoints
  • Google Cloud DNS

Brian Goldfarb, head of marketing for the Google Cloud Platform, said that Google is working to leverage its “history and investments” in data centers and data processing technology to bring what they have learned to the public. The most exciting part for Goldfarb is the breadth of possibilities that the infrastructure provides for businesses.

“The beauty of being an infrastructure provider is that the use cases are, essentially, limitless,” Goldfarb said.

At the 2014 Google I/O developer conference keynote, Urs Hölzle and Greg DeMichillie announced a few more developer tools for Cloud Platform users. Google Cloud Dataflow, a successor to MapReduce, is a way to create data pipelines. They also introduced a few smaller tools such as Cloud Save, Cloud Debugger, and Cloud Trace.

According to James Staten, an analyst at Forrester, Google has been building its cloud offerings out for a while, but it has struggled to differentiate its products from its competitors.

“They continue to unveil some interesting things for developers, particularly those that are doing big data, which seems to be their only major differentiation as a cloud platform right now. So, they’re building on that,” Staten said.

When it comes to the numbers that Forrester has on cloud platform users, Google isn’t at the bottom of the list, but it is nowhere near the top five because of its lack of differentiation.

According to Goldfarb, however, Google differentiates itself in three key ways:

1. Price and performance. Google offers automatic discounting and unique aspects in its business model for the Cloud Platform.

2. Technical capability. “We are a cloud first company,” Goldfarb said. He notes that Google builds tools for their engineers to work on cloud production, which then get translated to the public-facing products.

3. Innovation. Customers will be the first to receive what Goldfarb calls “unique competitive advantages,” new technical features as soon as they are created by Google. For example, when speaking of the new Cloud Dataflow he said, “There is nothing like it in the world.”

Still, one of the primary issues is that the Google Cloud Platform wasn’t initially geared to accommodate bigger enterprises.

As a platform-as-a-service, it primarily appealed to startups as it only supported Python and didn’t have as robust an offering as needed by bigger companies. According to Staten, enterprises code not only in Python, but in PHP, Ruby, and Java as well; and if you only support one of those, it’s not very appealing.

Of course, Google has grown to accommodate other languages, and the appeal has gone up slightly; but Staten said that Google still only has the basics. He said the real value for cloud platforms today is the ecosystem surrounding the infrastructure, and Google doesn’t yet have the ecosystem around the Cloud Platform that it needs to be competitive.

“The battle is no longer around base infrastructure-as-a-service,” Staten said. “It’s not about how many data centers you have, how fast those compute instances are and so forth. It’s all a battleground now around the services that are available above and beyond that platform and, more importantly, the ecosystem around those services.”

This is part of the reason why enterprise customers go to AWS or Azure. They go to those platforms because their peers are using them. They can draw on the experiences of their colleagues and peers for advice and best practices. Staten also notes that there are plenty of available partners that many enterprises already know and are already comfortable with. Some businesses are simply more comfortable working with companies such as Amazon, IBM, Rackspace, and Microsoft.

Still, some companies do trust their cloud offerings to Google. While its portfolio may not include as many Fortune 500 companies as some of its competitors, Google still boasts the likes of Khan Academy, Rovio, Gigya, Pulse, and Snapchat.

“Our fundamental goal with partners in the ecosystem is to empower them,” Goldfarb said.

Goldfarb noted that working with its partner ecosystem and engaging the open source community are some of Google’s highest priorities. He also believes that the heavy focus on open source is a differentiator for Google among its competitors.

The first step, Staten said, is for Google to make a play around its existing products. For an ecosystem to grow and flourish, Google will need to give potential Cloud Platform customers a reason to use its other products.

“Right now if I want to build Android applications, or I want to extend the Google applications, or I want to take advantage of any Google technologies, there’s not a compelling reason for me to do that on their Cloud Platform,” he said. “In fact, it’s going to be easier, and more effective, for me to do that on Amazon or any of the other cloud platforms that are out there.”

Conversely, Google also needs to focus on getting companies that are using its other products to use the Cloud Platform as well. Google needs a sticky value proposition if it wants strong enterprise appeal. Staten mentioned that this could play out as a suite offering or something similar.

It’s not that Google has a poor reputation among business customers. The bigger issue is that the incumbent enterprise vendors have built deeper trust by working with enterprises for so long. To further build that trust, Google will need to take a serious look at its ad-heavy revenue model.

Staten said, “The enterprise hates advertising. So, they’re very much the antithesis of the Google historical model.” That means Google will have to change its approach to accommodate more enterprise customers, so that it’s known as more than just an advertising company. That could even serve to help diversify Google’s revenue model.

Google has done a good job, so far, with much of its pricing and aggressiveness going after deals, but there are some things it can do to better its interactions with the enterprise.

“The biggest thing for Google is understanding that having a relationship with an enterprise is way different than having a relationship with a consumer,” Staten said.

What Staten believes is that Google doesn’t sell like an enterprise sales organization. Enterprise customers don’t want to operate within a consumer-style sales model. Business customers value things like a specific, named sales rep that they can easily contact.

Enterprise customers also tend to be more apt to go where they can get customized support. They need customer support that doesn’t involve getting in line behind thousands of consumers with the same questions, and they rightfully expect the potential for custom SLAs. But, according to Goldfarb, Google recognizes the difference between enterprise and consumer customers.

“We’ve done a lot over the last 12 months to build out our enterprise sales and services support,” Goldfarb said.

Regarding enterprise customers of the Cloud Platform, Google offers a technical account management team with the potential for business customers to get connected to a specific, named sales representative. Goldfarb also mentioned a 24/7 multi-language support system and a team of more than 1,000 people dedicated to handling enterprise accounts.

According to Staten, Google certainly can compete with AWS and Azure, but they have some catching up to do if they want to be truly competitive.

“I think they are making some progress, but they probably are not making it as fast as they think they need to in this market,” Staten said. “What they have to do is balance catching up with Amazon, with differentiating their offering. That balance is tricky, and it’s not entirely obvious where that balance is.”

What do you think? Do you think the Google Cloud Platform can compete with products like AWS and Azure? Do you think Google is doing enough to accommodate enterprise customers?

Lina Deveikyte
Lina.Deveikyte@altabel.com 
Skype ID: lina_deveikyte
Marketing Manager (LI page)
Altabel Group – Professional Software Development

Despite ongoing concerns about compliance and governance, the public cloud offers tempting benefits for some use cases. Here are the ones worth serious consideration.

Public cloud solutions remain mired in a sea of distrust because of their inability to overcome enterprise governance and reliability concerns. Yet these solutions are still making inroads into enterprises when they can present specific business solutions to the line-of-business managers who champion them. In today’s business settings, where are public cloud solutions most likely to succeed, and what can public cloud providers learn from this adoption to enhance their chances for future adoption?

First, offer a solution that delivers economy that enterprises can’t resist!

Several public cloud solutions are gaining traction in this area. Among them are:

#1 Application testing and staging

Public cloud IaaS (infrastructure as a service) enables enterprises to forgo building new data centers or expanding existing ones by offloading their application development, testing, and staging to third-party cloud providers. Since they pay a baseline subscription that increments or decrements on a pay-as-you-go basis, enterprises incur no new capital expenses, and they also reduce the risk of resources sitting idle during times when application development, testing, and staging activities are slow. As long as a cloud provider has governance and data protection policies that meet enterprise standards, outsourcing is an option that can be extremely attractive to CIOs and CFOs.

#2 Temporary processing and storage needs

During peak processing times like the holiday retail season, enterprises can increment processing and storage by “renting” the resources they need from the cloud. The financial benefit is much the same as it is for application testing and staging.

#3 Data archiving

Again assuming that the cloud provider can meet corporate governance standards, some enterprises are opting to offload historical data from their data centers to the cloud. This assumes that the data will not be needed for big data trend analytics and is kept for long-term storage purposes only.
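As a concrete illustration of how such archiving is often set up in practice, here is a minimal sketch using Python and the boto3 SDK that moves objects in an S3 bucket into a cold-storage class after 90 days. The bucket name, prefix, and retention periods are placeholders for illustration only, not details from any of the cases above.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects under an "archive/" prefix to cold storage after 90 days,
# and expire them after roughly 7 years. All names and periods are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-historical-data",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```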

#4 Virtual Desktop Infrastructure (VDI)

The jury is still out on VDI, which began as a “hot” idea to reduce office software licensing fees but ran into both performance and management issues. Even so, it is still on corporate CIOs’ radars.

Next, offer a solution that solves an issue that enterprises can’t solve on their own!

#5 Supplier management

ERP (enterprise resource planning) was designed for internal processes and operational integration within the walls of the enterprise. Unfortunately, businesses going global need to manage thousands of suppliers worldwide through a series of external business processes and data exchanges that their internal systems are ill-suited for. A number of cloud-based providers are making a splash in the supply chain area by offering integrated networks of suppliers and companies, all with secure access to a uniform data repository.

#6 Back-office optimization

So much work has gone into revenue generation that enterprises still find themselves losing profit margin to inefficient back-office operations that eat up profits and that they can’t seem to fix. Especially in industries like brokerage and financial services, there are now cloud-based analytics solutions that determine where back-office “profit bleed” is occurring and stop it.

#7 Sales force management

Field-based operations like sales are another example of an external business function that is difficult for traditional enterprise systems to address. Enterprises are adopting a plethora of cloud-based solutions that enable real-time access to sales management and customer relationship management systems, giving everyone in sales, marketing, service, and the C-suite 360-degree visibility of the customer and of sales progress.

#8 Project management and collaboration

Project management activities in enterprises have suffered for years because of inefficient, monolithic project management systems that depended on a central project administrator to keep tasks updated as information came in. Needless to say, the accuracy of project status suffered, often spelling disaster for project timelines and deliverables. Now there are cloud-based solutions that link together every project participant and stakeholder, enabling real-time updates and real-time collaboration that project managers have never seen before.

While these use cases are promising for public cloud providers, many providers are still struggling to attain the market share they want because of continuing enterprise skepticism over the strength of their governance and over their ability to deliver solutions that are significantly better than what the enterprise already has. No doubt these perceptions will continue to haunt public cloud providers in the near term. That makes it more important than ever to fill a need that enterprises can’t meet on their own, or to deliver a cost-savings proposition so compelling that it is impossible to ignore.

 

Lina Deveikyte
Lina.Deveikyte@altabel.com
Skype ID: lina_deveikyte
Marketing Manager (LI page)
Altabel Group – Professional Software Development

The pundits would have you believe there is a popular debate and a difficult decision among IT architects – whether to go with a private cloud deployment, public cloud deployment, or a hybrid combination. They say the decision comes down to factors that are individual to each organization. But the truth is, there really is no debate at all (at least there shouldn’t be).

Private cloud is inefficient. It is built on a model that encourages overprovisioning; in fact, to get the maximum benefit from a private cloud (true elasticity) you have to overprovision. The public cloud, on the other hand, is the most widely applicable option and delivers the most value to the majority of businesses.

Here is why the public cloud should be your only consideration:

#1 The need for regulatory compliance. Security or privacy regulations and audits are often years behind the industry, but their rules can be challenged. We’ve seen customers exceed auditors’ expectations, make a case for their architecture, and win the day, gaining all the benefits of a public cloud architecture with all the security required by common regulatory frameworks, even HIPAA, SOX, or DoD standards. This is hard to replicate with private clouds, because with internal data protection you are going to have internal SLAs and internal compliance checklists, which require frequent upkeep, higher costs, and a more complicated infrastructure.

#2 Start-up companies need the public cloud. These companies are often involved in development with uncertain requirements. They don’t know what they might need day to day, and many are on a very tight timeline to get their products to market. These situations call for a public cloud deployment, like AWS, where more or fewer resources can be configured and absorbed in a matter of minutes. While they might maintain a small infrastructure onsite, the majority of their infrastructure simply has to be in the public cloud.

#3 Security needs to be a primary concern for any cloud-based deployment. Web and cloud security can change very quickly, and some perceive a public cloud infrastructure to be more vulnerable than a private cloud, but that’s actually a misconception. A private cloud allows IT to control the perimeter, but IT is then also responsible for staying on top of a rapidly shifting security landscape and making all required fixes, updates, and upgrades. Public clouds take care of all that. Data is protected by managed security at both the software and the physical level, since the large-scale data centers used by public cloud providers have state-of-the-art security. For example, more than half of the U.S. Government has moved to the public cloud; and surprisingly, the banking industry holds the most activity (64 percent) in the public cloud, ahead of social media, online gaming, photo applications, and file sharing. [IT Consultants’ Insight on Business Technology, NSK Inc., "7 Statistics You Didn’t Know About Cloud Computing."]

#4 The need for redundancy and disaster recovery. To make a private cloud truly redundant, you need to host virtual mirrors of the entire infrastructure across multiple hosted providers, which can be public clouds themselves. To keep it completely private, an organization needs to run those data centers itself, a vastly expensive proposition. There really isn’t a better choice for this scenario than a well-architected public cloud deployment. Taking AWS as an example, this cloud can be incredibly redundant if you take advantage of its lesser-known features. Region-to-region redundancy, for instance, means the infrastructure is backed up not just in different data centers in the same general region (like the US Northeast, for example), but also in a second, removed region (such as the Pacific Northwest). Many AWS customers don’t even consider this and feel that multiple zones in the same region are enough. That may be, but opting for region-to-region redundancy puts data and virtual infrastructure in two very different locations, and should anything happen to one, the odds are very small that anything happened to the other. AWS can get very granular with such deployments, too, offering around-the-world redundancy and even ensuring that certain data centers are located on different seismic plates. This can be mirrored with a private cloud deployment, but the cost is colossal.
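To make the region-to-region idea just described a little more concrete, here is a minimal, hypothetical boto3 sketch that copies a machine image from one AWS region to a geographically distant one, so an environment could be rebuilt there if the primary region were lost. The AMI ID and region names are placeholders.

```python
import boto3

# Work from the destination (disaster-recovery) region and pull the image
# across from the primary region. IDs and regions are placeholders.
ec2_dr = boto3.client("ec2", region_name="us-west-2")

ec2_dr.copy_image(
    Name="app-server-dr-copy",
    SourceImageId="ami-0123456789abcdef0",  # placeholder AMI in the primary region
    SourceRegion="us-east-1",
)
```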

#5 Which brings us to the issue of cost. Budget is, of course, a huge factor in this decision, and it becomes a highly individual consideration with multiple factors that can affect the choice. Companies with large amounts of infrastructure already installed might find it cheaper to implement a private cloud, since in many cases they already have not only the hardware but also the operating systems and management tools required to build one. The flip side is that hardware infrastructure, and the demands made on it by software, especially operating systems, change about every three to five years.

Public cloud deployments are entirely virtual, which means the hardware hosting those virtual machines is irrelevant to the customer; it’s on the provider to keep that infrastructure current. That represents significant cost savings over the long term. Smaller companies that need to stretch their investment as far as it can go will see those benefits right away. These organizations will be very attracted not only to the infrastructure services offered by the public cloud, but also to the application-level services offered by partners and other customers of providers like AWS. In this case, an organization is not only deploying servers in the cloud, it’s also consuming end-user applications on a subscription basis, bypassing the cost of software licensing, deployment, and updating. That’s very attractive to companies that want to be agile regardless of size, to those with limited IT resources, and even to companies that analyze their annual expenditures and find that a public cloud deployment compares favorably.

Most IT professionals and market researchers contend that while the majority of businesses today are eyeing a hybrid deployment, that’s really because they’re being conservative. Yet we know that data centers are a single point of failure. So can we really afford to be conservative? How many private cloud deployments are fully redundant across multiple physical buildings on separate flood plains and earthquake zones? For the small group that has implemented full redundancy at the data center level – try asking for their hypervisor license bill and their maintenance and support labor costs.

Private vs. public is a hot debate among technical circles, but in most cases, taking a long, careful look at the public cloud will show it to be the best-case answer. Is successful private cloud deployment possible? Of course. Is it efficient? No.

Lina Deveikyte
Lina.Deveikyte@altabel.com
Skype ID: lina_deveikyte
Marketing Manager (LI page)
Altabel Group – Professional Software Development

The practice of renting virtualised pools of servers and storage over the net is known as infrastructure as a service (IaaS), and is the most popular class of cloud service available today.

But most businesses are only making limited use of IaaS, with the majority restricting their use to spinning up application development and test environments or to rapidly provisioning extra server capacity during periods of heavy demand.

The reasons for this limited adoption are many: concerns about security of data and systems controlled by a third party, worries over the reliability of systems run by a cloud provider and served over the internet, and the premium paid for getting a vendor to provide infrastructure over running it in-house.

But where demand for IT services is uneven, fluctuating between high and low demand, or where a business needs infrastructure to test applications for a short period or to try out a new endeavour, it can be more cost-effective and far quicker to rent infrastructure from a cloud provider than to build it in-house. There are even instances of companies like Netflix, which runs its entire IT operation on Amazon Web Services’ infrastructure.

AWS’s main IaaS offerings are EC2, which provides compute on demand, and S3, which provides storage on demand. EC2 gives companies access to virtual machines, or instances, running an OS and applications of their choice over the internet, with these instances being configurable and controllable via web service APIs. Alongside and on top of EC2 and S3, AWS provides a range of cloud offerings related to networking, load balancing, databases, data warehousing and big data analysis, as well as a range of management tools.
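As a rough illustration of what “configurable and controllable via web service APIs” looks like in practice, here is a minimal Python sketch using the boto3 SDK to start one EC2 instance and drop an object into S3. The AMI ID, bucket name and region are placeholders, not values from the article.

```python
import boto3

# Placeholder region, AMI ID and bucket name, used for illustration only.
ec2 = boto3.resource("ec2", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

# Launch a single small on-demand instance.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)

# Store an object in S3.
s3.put_object(
    Bucket="example-team-bucket",
    Key="hello.txt",
    Body=b"hello from the cloud",
)
```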

At the AWS Enterprise Summit in London on Tuesday, businesses broadly outlined the ways they are using AWS today and the lessons that can be learned from their experience.

Application development and testing
Developers and testers commonly use a self-service approach to draw computing resources from the likes of AWS EC2, S3 and Amazon’s block-level storage service EBS. Typically this is carried out via a self-service portal, such as AWS’ own CloudFormation, or via some other form of API call.

Businesses often create self-service enterprise portals that automatically restrict how much computing resource should be provisioned and for how long based on governance and workflow requirements, and that tag the resources that are appropriate for different teams.
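A portal like that typically wraps calls such as the following hypothetical boto3 sketch, which launches a pre-approved CloudFormation template and tags the resulting stack so usage can be attributed to a team and expired on schedule. The stack name, template URL and tag values are invented for illustration.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="eu-west-1")

# Launch a pre-approved template from a hypothetical internal portal, tagging
# the stack so costs can be attributed to the right team and cleaned up later.
cfn.create_stack(
    StackName="dev-test-env-team-a",
    TemplateURL="https://example-bucket.s3.amazonaws.com/dev-test.template",  # placeholder
    Tags=[
        {"Key": "Team", "Value": "team-a"},
        {"Key": "Environment", "Value": "dev-test"},
        {"Key": "ExpiresAfterDays", "Value": "14"},
    ],
)
```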

Businesses are using EC2 to enable standard-build developer/test workstations, to add integrated project management and issue tracking, to run popular source control systems, and to drive build servers and continuous integration, according to Yuri Misnik, head of solutions architecture at AWS.

On the testing side, EC2 instances are being used to let unit and regression tests be scaled up and run in parallel in a fraction of the time it would take in-house, to run A/B scenario testing on replica stacks, and to create sandboxes for security testing, said Misnik. For testing how applications perform under load, he said, customers sometimes use spot instances – a pricing model where customers bid for time on unused EC2 instances – as a cost-effective way of stressing applications.
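A load-generation fleet on spot capacity might be requested with something like the boto3 sketch below. The bid price, AMI ID and instance type are illustrative assumptions, not figures from the talk.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Bid for cheap spare capacity to run a short-lived load-generation fleet.
response = ec2.request_spot_instances(
    SpotPrice="0.05",            # maximum hourly bid in USD (illustrative)
    InstanceCount=10,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder load-generator AMI
        "InstanceType": "c5.large",
    },
)

for req in response["SpotInstanceRequests"]:
    print("Spot request:", req["SpotInstanceRequestId"])
```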

AWS has a number of pricing models for renting infrastructure, based on how customers want to use it. For instance, on-demand instances let customers pay for compute capacity by the hour with no long-term commitments, while reserved instances require a one-time upfront payment in return for a significant discount on the hourly rate. Customers can save a lot of money by ensuring the pricing model they use is best suited to their needs, said Misnik, citing a customer that cut costs by 45 percent by transitioning to reserved instances. AWS also provides a tool, Trusted Advisor, which makes recommendations on how customers can save money and improve performance or security.
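The trade-off between the two models is simple arithmetic, as the rough sketch below shows for one instance running all year. The hourly rates and upfront fee are hypothetical, not current AWS prices, so the resulting percentage is only illustrative.

```python
# Rough, illustrative comparison of on-demand vs reserved pricing for one
# instance running continuously for a year. Prices are placeholders.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10      # USD per hour (hypothetical)
reserved_upfront = 350.00  # one-time payment (hypothetical)
reserved_rate = 0.04       # discounted hourly rate (hypothetical)

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_upfront + reserved_rate * HOURS_PER_YEAR

saving = 1 - reserved_cost / on_demand_cost
print(f"On-demand: ${on_demand_cost:.0f}, reserved: ${reserved_cost:.0f}, "
      f"saving: {saving:.0%}")
```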

The entertainment conglomerate Lionsgate has used EC2 to develop and test SAP apps, reducing deployment time from weeks to days, as has Galata Chemicals, which cut the cost of running development and testing operations by 70 percent by moving to EC2.

Building and running new applications
The UK broadcaster Channel 4 launched its first live application on AWS in 2008 and today runs all of its new web apps on the infrastructure.

Describing the benefits of running apps on AWS, Bob Harris, chief technology officer with Channel 4, name-checked agility, scalability and resilience.

“We get servers up and running for teams in minutes, if it’s urgent, or hours,” he said.

The broadcaster sees “a huge increase in productivity” among teams building apps running on AWS, said Harris, because of the development team’s ability to deploy or destroy virtual servers and apps as and when they need to.

That freedom to spin up new instances has a downside, however.

“One of the things it lets you do is be inefficient far more efficiently,” said Harris.

“People tend to start instances, maybe they start more than they need or too big. So we’ve had a constant battle over the past couple of years making sure that we’re keeping our house in order behind us.”

Tools like Trusted Advisor are designed to help keep on top of this problem by flagging up the number of instances being used.

For a broadcaster that has to deal with spikes in traffic to its websites and apps after popular TV programmes are broadcast, and that doesn’t want to have to buy excess capacity for one-time peaks in demand, the scalability of AWS was a good fit, said Harris.

“The peaky workloads are the important ones. In the past you had to explain to the marketing manager those 404s were a sign of success because it showed how much traffic came to your website. Today I can’t remember the last time that happened on a Channel 4 website.”

Harris estimates that the total cost of ownership for running these services on AWS is more than five times lower than running them on in-house infrastructure.

He stressed the need to build services that work well with horizontal scaling across different EC2 instances as demand increases. Licensing of back-end software is another consideration, with Harris saying that there are still difficulties with software vendors tied to a per-machine or per-CPU-socket licensing model, which is obviously a poor fit for EC2, where software can be running on a varying number of virtual and physical machines based on demand.
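Horizontal scaling of the kind Harris describes is usually handled by an Auto Scaling group that grows and shrinks a stateless tier with demand. The boto3 sketch below shows the general shape of such a setup; the group name, launch configuration and availability zones are invented for illustration.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Scale a stateless web tier horizontally between 2 and 20 instances across
# two availability zones. Names are placeholders; the launch configuration
# is assumed to exist already.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",
    LaunchConfigurationName="web-tier-launch-config",
    MinSize=2,
    MaxSize=20,
    DesiredCapacity=2,
    AvailabilityZones=["eu-west-1a", "eu-west-1b"],
)
```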

“My personal view is that a significant number of proprietary models are simply not cloud-friendly because they don’t allow us to take advantage of that flexibility. Cloud plus open source is really the place you need to be if you want high scalability,” he said.

Proprietary software vendors are beginning to make concessions for running their software in the cloud, with Microsoft, SAP, Oracle and IBM offering licence mobility deals for their major software packages to AWS customers that are a better fit for cloud computing’s scalable pay per use model.

Augment on-premise systems and run hybrid apps
Hybrid apps are those that rely on a mixture of back-end services running on both in-house and AWS infrastructure.

AWS provides multiple features to help companies building hybrid apps integrate their datacentres with AWS infrastructure in a secure fashion, such as AWS Direct Connect and Virtual Private Cloud.

Access controls similar to those within in-house datacentres can be set on AWS infrastructure using its Identity and Access Management (IAM) tools. AWS CloudHSM (hardware security module) appliances, meanwhile, provide highly secure key management for customers who have to follow stringent data protection regulations before they are able to move data onto AWS infrastructure.
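As an example of the kind of access control IAM allows, the hypothetical boto3 sketch below attaches an inline policy that limits a role to reading and writing a single S3 bucket. The role, policy and bucket names are placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Restrict a hybrid-app role to one bucket, mirroring the scoped access
# controls used inside an on-premise datacentre. Names are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-hybrid-app-bucket/*",
    }],
}

iam.put_role_policy(
    RoleName="hybrid-app-role",
    PolicyName="hybrid-app-s3-access",
    PolicyDocument=json.dumps(policy_document),
)
```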

Another approach taken by businesses is to offload certain systems to AWS: Nokia runs analysis on data stored on Amazon’s Redshift data warehousing platform, allowing it to reduce cost by up to 50 percent and run queries up to twice as fast compared to its previous data warehouse.

Channel 4 is using the AWS Elastic MapReduce (EMR) service, the AWS-hosted Hadoop framework running on EC2 and S3, to analyse web logs from its sites going back a number of years and to hone ad targeting and programme recommendations.
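Launching a transient EMR cluster for this kind of log analysis looks roughly like the boto3 sketch below. The cluster size, release label and S3 log location are assumptions for illustration, not details of Channel 4’s setup.

```python
import boto3

emr = boto3.client("emr", region_name="eu-west-1")

# Spin up a small, transient Hadoop cluster to crunch archived web logs in S3,
# then shut it down automatically when the work is done. Values are placeholders.
response = emr.run_job_flow(
    Name="weblog-analysis",
    ReleaseLabel="emr-5.36.0",
    LogUri="s3://example-logs-bucket/emr/",
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after the last step
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster:", response["JobFlowId"])
```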

Channel 4’s Harris said that EMR provides a way for the broadcaster to experiment without the commitment of an up front investment, an important consideration when the outcome of big data analysis is uncertain.

“It’s about the cost of exit – how much money have I sunk if I have to walk away. In big data we’re all trying to work out ‘What’s the real value of a better ad target?’, and it’s a hard analysis to do. Imagine if I wanted to ramp my physical platform by ten or more times – we’re talking tens of millions of pounds. By the time I’ve also hired the half a dozen people to run this thing, this is seriously expensive.”

AWS’ Misnik also said some businesses are using AWS infrastructure as a cloud-based disaster recovery site, running anything up to hot standby environments with rapid failover.

Migrating existing apps to the cloud
Migrating apps running on in-house infrastructure to AWS is less common, as it presents a number of challenges.

Matthew Graham-Hyde, CIO of the media conglomerate Kantar Group, said getting a migrated app to work requires both re-engineering the app and working out the right mix of cloud infrastructure it needs to sit upon.

“It’s a very different working model when you take an application and re-engineer it for the cloud,” he said, for instance so it is able to scale across available instances based on demand and exploit the distributed nature of the cloud architecture to become more resilient to failure.

“You have to have everyone in the room – your infrastructure architects, sysadmins, business analysts, developers, consulting partners – and you’re ripping up installation after installation as you re-engineer this application to get the true benefits of being a cloud application. It’s a very iterative model.”

Kantar has migrated a number of apps to AWS, including a third party data visualisation tool whose running costs have dropped by 50 percent since the move.

AWS recommends migrating apps that are under-utilised, that have an immediate need to scale, or that are simply the easiest to move. Examples of apps and systems that should prove more straightforward to migrate are, AWS claims: web apps, batch processing systems, content management systems, digital asset management systems, log processing systems, collaborative tools and big data analytics platforms.

Another company that claims to have benefited from shifting existing systems to AWS is the pharmaceutical firm Bristol-Myers Squibb, which migrated its clinical trial simulation platform, cutting simulation times from 60 hours to 1.3 hours and reducing costs by 60 percent.

Everything in the cloud
Video streaming company Netflix is one of the few firms to have dispensed with its in-house datacentres entirely in favour of running its entire infrastructure on top of AWS services.

The spiky nature of customer traffic means Netflix is a good match for the scalability offered by EC2. Netflix uses thousands of EC2 instances in multiple regions and across the various AWS availability zones to support more than 33 million customers worldwide.

Not having to run IT infrastructure has freed up the IT team at Netflix to devote time to improving the performance and features of the company’s IT services. But Netflix is also an example of the amount of work needed to go “all-in” on cloud: the company has devoted a lot of time to making AWS work as a platform for its business (going as far as developing the Chaos Monkey software, which breaks parts of production systems to test overall resiliency), to managing the latency inside a distributed architecture, and to working around the limitations on compute, storage and networking that come with sharing a server’s resources with other customers.
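Netflix’s actual Chaos Monkey is a full open-source tool, but the underlying idea can be sketched in a few lines of boto3: pick a running instance from a designated group at random, terminate it, and watch whether the service recovers on its own. The tag name and region below are placeholders, and this is only a toy illustration of the technique, not Netflix’s implementation.

```python
import random

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Find running instances that have opted in to the chaos test via a tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:ChaosGroup", "Values": ["web-tier"]},       # placeholder tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    # Terminate one at random and verify that the service heals itself.
    victim = random.choice(instance_ids)
    ec2.terminate_instances(InstanceIds=[victim])
    print("Terminated", victim, "- now watch the system recover")
```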

Kristina Kozlova
Kristina.Kozlova@altabel.com
Skype ID: kristinakozlova
Marketing Manager (LI page)
Altabel Group – Professional Software Development

