Altabel Group's Blog


Despite ongoing concerns about compliance and governance, the public cloud offers tempting benefits for some use cases. Here are the ones worth serious consideration.

Public cloud solutions remain mired in distrust because of their perceived inability to overcome enterprise governance and reliability concerns. Yet these solutions are still making inroads into enterprises when they present specific business solutions to the line-of-business managers who champion them. In today’s business settings, where are public cloud solutions most likely to succeed, and what can public cloud providers learn from this adoption to enhance their chances of future adoption?

First, offer a solution that delivers savings enterprises can’t resist!

Several public cloud solutions are gaining traction in this area. Among them are:

#1 Application testing and staging

Public cloud IaaS (infrastructure as a service) enables enterprises to forego building new data centers or expanding existing ones by offloading application development, testing and staging to third-party cloud providers. Since they pay a baseline subscription that increments or decrements on a pay-as-you-go basis, enterprises incur no new capital expenses and reduce the risk of resources sitting idle when development, testing and staging activity is slow. As long as a cloud provider has governance and data protection policies that meet enterprise standards, outsourcing is an option that can be extremely attractive to CIOs and CFOs.

#2 Temporary processing and storage needs

During peak processing times like the holiday retail season, enterprises can increment processing and storage by “renting” the resources they need from the cloud. The financial benefit is much the same as it is for application testing and staging.

#3 Data archiving

Again assuming that the cloud provider can meet corporate governance standards, some enterprises are opting to offload historical data from their data centers to the cloud. This assumes the data will not be needed for big data trend analytics and is being kept for long-term storage purposes only.

#4 Virtual Desktop Infrastructure (VDI)

The jury is still out on VDI, which began as a “hot” idea for reducing office software licensing fees but ran into both performance and management issues. Even so, it remains on corporate CIOs’ radars.

Next, offer a solution that solves an issue that enterprises can’t solve on their own!

#5 Supplier management

ERP (enterprise resource planning) systems were designed for internal processes and operational integration within the walls of the enterprise. Unfortunately, businesses going global need to manage thousands of suppliers worldwide through a series of external business processes and data exchanges that their internal systems are ill-suited for. A number of cloud-based providers are making a splash in the supply chain area by offering integrated networks of suppliers and companies – all with secure access to a uniform data repository.

#6 Back-office optimization

So much effort has gone into revenue generation that enterprises still find their profit margins eroded by inefficient back-office operations they can’t seem to fix. Especially in industries like brokerage and financial services, there are now cloud-based analytics solutions that determine where back-office “profit bleed” is occurring – and stop it.

#7 Sales force management

Field-based operations like sales are another example of an external business function that is difficult for traditional enterprise systems to address. Enterprises are adopting a plethora of cloud-based solutions that enable real-time access to sales management and customer relationship management systems, giving everyone in sales, marketing, service and the C-suite 360-degree visibility of the customer and of sales progress.

#8 Project management and collaboration

Project management in enterprises has suffered for years from inefficient, monolithic project management systems that depended on a central project administrator to keep tasks updated as information came in. Needless to say, the accuracy of project status suffered – often spelling disaster for project timelines and deliverables. Now there are cloud-based solutions that link together every project participant and stakeholder, enabling real-time updates and real-time collaboration of a kind project managers have never seen before.

While these use cases are promising for public cloud providers, it doesn’t change the fact that many are still struggling to attain the market share they want because of continuing enterprise skepticism over the strength of their governance – and over their ability to deliver solutions that are significantly better than what the enterprise already has. No doubt these perceptions will continue to haunt public cloud providers in the near term. This makes it more important than ever to fill a need that enterprises can’t meet on their own – or to deliver a cost savings proposition so compelling that it is impossible to ignore.

 

Lina Deveikyte
Lina.Deveikyte@altabel.com
Skype ID: lina_deveikyte
Marketing Manager
Altabel Group – Professional Software Development

The pundits would have you believe there is a popular debate and a difficult decision among IT architects – whether to go with a private cloud deployment, public cloud deployment, or a hybrid combination. They say the decision comes down to factors that are individual to each organization. But the truth is, there really is no debate at all (at least there shouldn’t be).

Private cloud is inefficient. It is built on a model that encourages overprovisioning; in fact, to get the maximum benefit from private cloud – true elasticity – you have to overprovision. The public cloud, on the other hand, is the most widely applicable and delivers the most value to the majority of businesses.

Here is why the public cloud should be your only consideration:

#1 The need for regulatory compliance. Security and privacy regulations and audits are often years behind the industry, but their rules can be challenged. We’ve seen customers exceed auditors’ expectations, make a case for their architecture, and win the day, gaining all the benefits of a public cloud architecture along with all the security demanded by common regulatory requirements, even HIPAA, SOX, or DOD standards. This is hard to replicate with private clouds: with internal data protection you are going to have internal SLAs and internal compliance checklists, which mean frequent upkeep, higher costs and a more complicated infrastructure.

#2 Start-up companies need the public cloud. These companies are often involved in development with uncertain requirements; they don’t know what they might need day-to-day, and many are on a very tight timeline to get their products to market. These situations mandate a public cloud deployment, like AWS, where more or fewer resources can be configured and absorbed in a matter of minutes. While they might maintain a small infrastructure onsite, the majority of their infrastructure simply has to be in the public cloud.

#3 Security needs to be a primary concern for any cloud-based deployment. Web and cloud security can change very quickly, and some perceive a public cloud infrastructure to be more vulnerable than a private cloud, but that’s actually a misconception. A private cloud allows IT to control the perimeter, but IT is also responsible for staying on top of a rapidly shifting security landscape and making all required fixes, updates, and upgrades. Public clouds take care of all that. Data is protected by managed security at both the software and the physical level, since the large-scale data centers used by public cloud providers have state-of-the-art security. For example, more than half of the U.S. Government has moved to the public cloud; and surprisingly the banking industry accounts for the most public cloud activity (64 percent) – ahead of social media, online gaming, photo applications, and file sharing. [IT Consultants’ Insight on Business Technology, NSK Inc., "7 Statistics You Didn’t Know About Cloud Computing."]

#4 The need for redundancy and disaster recovery. To truly make a private cloud redundant, you need to host virtual mirrors of the entire infrastructure across multiple hosted providers, which can be public clouds themselves. To keep it completely private, an organization needs to run those data centers itself – a vastly expensive proposition. There really isn’t a better choice for this scenario than a well-architected cloud deployment. Taking AWS as an example, this cloud can be incredibly redundant if you take advantage of its lesser-known features. Region-to-region redundancy, for instance, means the infrastructure is backed up not just in different data centers in the same general region (like the US Northeast, for example), but also in a second, removed region (such as the Pacific Northwest). Many AWS customers don’t even consider this and feel that multiple zones in the same region are enough. That may be, but opting for region-to-region puts data and virtual infrastructure in two very different locations, and should anything happen to one, the odds are very small that anything happened to the other (a minimal code sketch of this follows the list). AWS can get very granular with such deployments, too, offering worldwide redundancy and even ensuring that certain data centers sit on different seismic plates. This can be mirrored with a private cloud deployment, but the cost is colossal.

#5 Which brings us to the issue of cost. Budget is, of course, a huge factor in this decision, and it is a highly individual consideration with multiple factors at play. Companies with large amounts of infrastructure already installed might find it cheaper to implement a private cloud, since in many cases they already have not only the hardware but also the operating systems and management tools required to build one. But the flip side is that hardware infrastructure, and the demands made on it by software, especially operating systems, change about every 3-5 years.

Public cloud deployments are entirely virtual, which means the hardware hosting those virtual machines is irrelevant to the customer: it’s on the provider to keep that infrastructure current. That represents significant cost savings long term. Smaller companies that need to stretch their investment as far as it can go will see those benefits right away. These organizations will be attracted not only to the infrastructure services offered by the public cloud, but also to the application-level services offered by partners and other customers of providers like AWS. In this case, an organization is not only deploying servers in the cloud, it’s consuming end-user applications on a subscription basis, bypassing the cost of software licensing, deployment, and updating. That’s very attractive to companies of any size that want to be agile with limited IT resources, and even to companies that analyze their annual expenditures and find that a public cloud deployment compares favorably.
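As an illustration of the region-to-region redundancy mentioned in #4, here is a minimal sketch using boto3, AWS’s Python SDK (the SDK choice, bucket names and object key are illustrative assumptions, not anything prescribed above). It mirrors a backup object from a bucket in one region into a bucket in another:

```python
import boto3

# Primary copy lives in a (hypothetical) US East bucket; the mirror
# goes to a bucket created in a second, distant region.
source = {"Bucket": "example-backups-us-east-1", "Key": "nightly/db-dump.sql.gz"}

s3_west = boto3.client("s3", region_name="us-west-2")
s3_west.copy_object(
    CopySource=source,
    Bucket="example-backups-us-west-2",  # pre-created in the second region
    Key="nightly/db-dump.sql.gz",
)
```

In practice S3 also offers built-in cross-region replication, which removes the need to copy objects by hand; the sketch simply makes the two-region idea concrete.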

Most IT professionals and market researchers contend that while the majority of businesses today are eyeing a hybrid deployment, that’s really because they’re being conservative. Yet we know that data centers are a single point of failure. So can we really afford to be conservative? How many private cloud deployments are fully redundant across multiple physical buildings on separate flood plains and earthquake zones? For the small group that has implemented full redundancy at the data center level – try asking for their hypervisor license bill and their maintenance and support labor costs.

Private vs. public is a hot debate among technical circles, but in most cases, taking a long, careful look at the public cloud will show it to be the best-case answer. Is successful private cloud deployment possible? Of course. Is it efficient? No.

Lina Deveikyte
Lina.Deveikyte@altabel.com
Skype ID: lina_deveikyte
Marketing Manager
Altabel Group – Professional Software Development

The practice of renting virtualised pools of servers and storage over the net is known as infrastructure as a service (IaaS), and is the most popular class of cloud service available today.

But most businesses are only making limited use of IaaS, with the majority restricting their use to spinning up application development and test environments or to rapidly provisioning extra server capacity during periods of heavy demand.

The reasons for this limited adoption are many: concerns about security of data and systems controlled by a third party, worries over the reliability of systems run by a cloud provider and served over the internet, and the premium paid for getting a vendor to provide infrastructure over running it in-house.

But where demand for IT services is uneven, fluctuating between high and low demand, or where a business needs infrastructure to test applications for a short period or to try out a new endeavour, it can be more cost-effective and far quicker to rent infrastructure from a cloud provider than to build it in-house. There are even instances of companies like Netflix, which runs its entire IT operation on Amazon Web Services’ infrastructure.

AWS’ main IaaS offerings are EC2, which provides compute on demand, and S3, which provides storage on demand. EC2 gives companies access to virtual machines, or instances, running an OS and applications of their choice over the internet, with these instances configurable and controllable via web service APIs. Alongside and on top of EC2 and S3, AWS provides a range of cloud offerings related to networking, load balancing, databases, data warehousing and big data analysis, as well as a range of management tools.
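To make “controllable via web service APIs” concrete, here is a minimal sketch using boto3, AWS’s Python SDK (the SDK, region and AMI ID are illustrative assumptions; nothing in the summit talks prescribed them). It starts a single small instance and prints its ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch one small on-demand instance from a placeholder machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The same call family covers stopping, tagging and terminating instances, which is what makes the self-service portals described below straightforward to build.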

At the AWS Enterprise Summit in London on Tuesday, businesses outlined the ways they are using AWS today and the lessons that can be learned from their experience.

Application development and testing
Developers and testers commonly use a self-service approach to draw computing resources from the likes of AWS EC2, S3 and Amazon’s block-level storage service EBS. Typically this is carried out via a self-service portal, such as AWS’ own CloudFormation, or via some other form of API call.

Businesses often create self-service enterprise portals that automatically restrict how much computing resource can be provisioned and for how long, based on governance and workflow requirements, and that tag the resources appropriate for different teams.

Businesses are using EC2 to provide standard-build developer/test workstations, to add integrated project management and issue tracking, to run popular source control systems and to drive build servers and continuous integration, according to Yuri Misnik, head of solutions architecture at AWS.

On the testing side, EC2 instances are being used to let unit and regression tests be scaled up and run in parallel in a fraction of the time they would take in-house, to run A/B scenario tests on replica stacks, and to create sandboxes for security testing, said Misnik. For testing how applications perform under load, he said, customers sometimes use spot instances – a pricing model where customers bid for time on unused EC2 instances – as a cost-effective way of stressing applications.
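As a sketch of that parallel test pattern, the fragment below asks for a batch of spot-priced workers with boto3. Note it uses the current spot interface, where capacity is requested as a market option rather than via the bidding model described above; the AMI, sizes and counts are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Launch 20 short-lived spot workers to run regression tests in parallel.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical test-runner image
    InstanceType="c5.large",
    MinCount=20,
    MaxCount=20,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},  # no persistent request
    },
)
```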

AWS has a number of pricing models for renting infrastructure, based on how customers want to use it. For instance, on-demand instances let customers pay for compute capacity by the hour with no long-term commitments, while reserved instances require a one-time upfront payment in return for a significant discount on the hourly rate. Customers can save a lot of money by ensuring the pricing model they use is best suited to their needs, said Misnik, citing a customer that cut costs by 45 percent by transitioning to reserved instances. AWS also provides a tool, Trusted Advisor, which makes recommendations on how customers can save money and improve performance or security.
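The arithmetic behind such savings is simple to sketch. The rates below are hypothetical placeholders rather than AWS’s actual prices; the point is the shape of the comparison for an always-on instance:

```python
# Back-of-envelope comparison of on-demand vs reserved pricing.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10        # $/hour, hypothetical
reserved_upfront = 350.00    # one-time payment, hypothetical
reserved_rate = 0.04         # discounted $/hour, hypothetical

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_upfront + reserved_rate * HOURS_PER_YEAR

print(f"On-demand: ${on_demand_cost:,.0f}/year")  # ~$876
print(f"Reserved:  ${reserved_cost:,.0f}/year")   # ~$700
```

With these made-up rates the reserved instance saves roughly 20 percent; actual savings depend on utilisation, which is exactly why an always-on workload suits reserved pricing while a bursty one suits on-demand or spot.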

Entertainment conglomerate Lionsgate has used EC2 to develop and test SAP apps, reducing deployment time from weeks to days, as has Galata Chemicals, which cut the cost of running development and testing operations by 70 percent by moving to EC2.

Building and running new applications
The UK broadcaster Channel 4 launched its first live application on AWS in 2008 and today runs all of its new web apps on the infrastructure.

Describing the benefits of running apps on AWS, Bob Harris, chief technology officer with Channel 4, name-checked agility, scalability and resilience.

“We get servers up and running for teams in minutes, if it’s urgent, or hours,” he said.

The broadcaster sees “a huge increase in productivity” among teams building apps running on AWS, said Harris, because of the development team’s ability to deploy or destroy virtual servers and apps as and when they need to.

That freedom to spin up new instances has a downside, however.

“One of the things it lets you do is be inefficient far more efficiently,” said Harris.

“People tend to start instances; maybe they start more than they need, or too big. So we’ve had a constant battle over the past couple of years making sure that we’re keeping our house in order behind us.”

Tools like Trusted Advisor are designed to help keep on top of this problem by flagging up the number of instances being used.

For a broadcaster that has to deal with spikes in traffic to its web sites and apps after popular TV programmes are broadcast, and doesn’t want to have to buy excess capacity for one time peaks in demand, the scalability of AWS was a good fit, said Harris.

“The peaky workloads are the important ones. In the past you had to explain to the marketing manager those 404s were a sign of success because it showed how much traffic came to your website. Today I can’t remember the last time that happened on a Channel 4 website.”

Harris estimates that the total cost of ownership for running these services on AWS is more than five times lower than running them on in-house infrastructure.

He stressed the need to build services that work well with horizontal scaling across different EC2 instances as demand increases. Licensing of back-end software is another consideration, with Harris saying that there are still difficulties with software vendors tied to per-machine or per-CPU-socket licensing models, which are obviously a poor fit for EC2, where software can run on a varying number of virtual and physical machines based on demand.

“My personal view is that a significant number of proprietary models are simply not cloud-friendly because they don’t allow us to take advantage of that flexibility. Cloud plus open source is really the place you need to be if you want high scalability,” he said.

Proprietary software vendors are beginning to make concessions for running their software in the cloud, with Microsoft, SAP, Oracle and IBM offering licence mobility deals for their major software packages to AWS customers that are a better fit for cloud computing’s scalable pay per use model.

Augmenting on-premise systems and running hybrid apps
Hybrid apps are those that rely on a mixture of back-end services running on both in-house and AWS infrastructure.

AWS provides multiple features to help companies building hybrid apps integrate their datacentres with AWS infrastructure in a secure fashion, such as AWS Direct Connect and Virtual Private Cloud.

Access controls similar to those within in-house datacentres can be set on AWS infrastructure using its Identity and Access Management tools, while AWS CloudHSM (hardware security module) appliances provide ultra-secure key management for customers who must meet stringent data protection regulations before they can move data onto AWS infrastructure.
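As a small illustration of scripting such access controls, the sketch below creates a narrow read-only IAM policy with boto3 (the policy name, bucket and scope are hypothetical; this is one way to do it, not something prescribed at the summit):

```python
import json
import boto3

iam = boto3.client("iam")

# A narrow read-only policy, analogous to a locked-down in-house share.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-hybrid-data/*",  # hypothetical bucket
    }],
}

iam.create_policy(
    PolicyName="hybrid-app-read-only",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```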

Another approach taken by businesses is to offload certain systems to AWS: Nokia runs analysis on data stored on Amazon’s Redshift data warehousing platform, allowing it to reduce cost by up to 50 percent and run queries up to twice as fast compared to its previous data warehouse.

Channel 4 is using AWS Elastic Map Reduce (EMR) service, the AWS-hosted Hadoop framework running on EC2 and S3, to analyse web logs from its sites going back a number of years and hone ad-targeting and programme recommendations.

Channel 4’s Harris said that EMR provides a way for the broadcaster to experiment without the commitment of an up-front investment, an important consideration when the outcome of big data analysis is uncertain.

“It’s about the cost of exit – how much money have I sunk if I have to walk away. In big data we’re all trying to work out ‘What’s the real value of a better ad target?’ It’s a hard analysis to do. Imagine if I wanted to ramp my physical platform by ten or more times – we’re talking tens of millions of pounds. By the time I’ve also hired the half a dozen people to run this thing, this is seriously expensive.”
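The low cost of exit Harris describes comes from the fact that an EMR cluster can be declared transient. A minimal boto3 sketch, with release label, instance sizes and log bucket as illustrative assumptions:

```python
import boto3

emr = boto3.client("emr", region_name="eu-west-1")

# Start a small Hadoop cluster that tears itself down when the job finishes.
emr.run_job_flow(
    Name="weblog-analysis-experiment",
    ReleaseLabel="emr-6.15.0",                 # example release label
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,  # transient: exit when idle
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://example-logs/emr/",           # hypothetical bucket
)
```

Walking away costs only the hours the cluster actually ran, which is the “cost of exit” point in the quote above.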

AWS’ Misnik also said some businesses are using AWS infrastructure as a cloud-based disaster recovery site, running anything up to hot standby environments with rapid failover.

Migrating existing apps to the cloud
Migrating apps running on in-house infrastructure to AWS is less common, as it presents a number of challenges.

Matthew Graham-Hyde, CIO of the media conglomerate Kantar Group, said getting a migrated app to work requires both re-engineering the app and working out the right mix of cloud infrastructure it needs to sit upon.

“It’s a very different working model when you take an application and re-engineer it for the cloud,” he said, for instance so it is able to scale across available instances based on demand and exploit the distributed nature of the cloud architecture to become more resilient to failure.

“You have to have everyone in the room – your infrastructure architects, sysadmins, business analysts, developers, consulting partners – and you’re ripping up installation after installation as you re-engineer this application to get the true benefits of being a cloud application. It’s a very iterative model.”

Kantar has migrated a number of apps to AWS, including a third party data visualisation tool whose running costs have dropped by 50 percent since the move.

AWS recommends migrating apps that are under-utilised, that have an immediate need to scale, or that are simply the easiest to move. Apps and systems that AWS claims should prove more straightforward to migrate include web apps, batch processing systems, content management systems, digital asset management systems, log processing systems, collaborative tools and big data analytics platforms.

Another company that claims to have benefited from shifting existing systems to AWS is the pharmaceutical firm Bristol-Myers Squibb, which migrated its clinical trial simulation platform, cutting simulation times from 60 hours to 1.3 hours and reducing costs by 60 percent.

Everything in the cloud
Video streaming company Netflix is one of the few firms to have dispensed with its in-house datacentres entirely in favour of running its entire infrastructure on top of AWS services.

The spiky nature of customer traffic means Netflix is a good match for the scalability offered by EC2. Netflix uses thousands of EC2 instances in multiple regions and across the various AWS availability zones to support more than 33 million customers worldwide.

Not having to run IT infrastructure has freed up the IT team at Netflix to devote time to improving the performance and features of the company’s IT services. But Netflix is also an example of the amount of work needed to go “all-in” on cloud: the company has devoted a lot of time to making AWS work as a platform for its business (going as far as to develop the Chaos Monkey software that breaks parts of production systems to test overall resiliency), to managing the latency inside a distributed architecture, and to working within the limits on compute, storage and networking that come with sharing a server’s resources with other customers.

Kristina Kozlova
Kristina.Kozlova@altabel.com
Skype ID: kristinakozlova
Marketing Manager
Altabel Group – Professional Software Development

Developers are in a unique position to educate themselves and to capitalize on cloud opportunities. Unlike learning new programming techniques or frameworks, cloud learning moves beyond development: there are infrastructure aspects to consider, as well as potential organizational process and policy changes. However, developers know the application, and cloud administration is a much lower bar than, for example, network administration. If you’re looking for a strategy to follow on the road to cloud enlightenment, you’re reading the right article.

Give the Cloud a Whirl
When it comes to the cloud, don’t wait for the storm to hit you; educate yourself, because there is no substitute for experimentation and hands-on experience. Start by separating reality from marketing. Almost every cloud vendor offers a free trial – Microsoft Azure, for example. If you are truly new to cloud development, imagine borrowing a company server for three months, only with no setup time: just turn it on and away you go.

Given that experimentation time is limited, go for breadth rather than depth. Get a taste of everything. What most developers find is that, after some initial orientation and learning, the experience becomes what they already know. For example, Azure has an ASP.NET-based hosting model called Web Roles. After configuring and learning Web Role instrumentation, the development experience is ASP.NET. Learning Azure Web Roles amounts to learning some new administration and configuration skills, coupled with a handful of new classes. The rest of what you need to know is nothing new if you’ve done ASP.NET!

Developers must keep their time constrained: struggling for hours with something new is often not worth the effort, and wide adoption of anything that is difficult to work with should be questioned. Cloud offerings are typically not niche, differentiating skills like, for example, SQL Server tuning.

Whatever cloud option a developer starts with, understand the authentication options. Intranet developers typically take authentication for granted – ASP.NET makes it look easy – but consider all the moving parts involved in making authentication automatic and secure. Understanding authentication is especially important if parts of an application will live in the organization’s datacenter and parts with the cloud provider.

Finally, look for the right opportunities to apply these new skills.

Navigating the Fog
Most developers are adept at picking when to jump on a new technology and when to pull back. Unlike adopting, for example, a new web services approach, adopting a cloud option entails learning a little more administration. The cloud can give a developer total control, but the price of that control is the extra administration.

Developers may find themselves in new territory here. Typically a “hardware person” selects a machine and a “network person” selects and configures a firewall. Cloud portals make network and server configuration easier, but the portal doesn’t eliminate the configuration role. The public cloud handles the hardware, but the developer must choose, for example, how many CPUs, servers, and load balancers will be needed. This lowers the administration bar, but it can also shift the burden onto the developer.

The cloud will not be the right option for every project, but give it a fair chance. Decision makers tend to have one of two reactions to the cloud – outright rejection or wild-eyed embrace – and neither is healthy; there is middle ground. Don’t let unrealistic expectations set by marketing brochures guide the first project. The hands-on experimentation described earlier will be helpful here. Set the bar low, and make the first experience a good experience.

Supplementing with the Cloud
One potential approach is to supplement with the cloud: let the cloud handle some part of the application. For example, requirements may dictate a web page to handle user registration. Registrations often have deadlines and, given human nature, people often procrastinate, so registration traffic is likely to spike in the week or few days before the deadline. Rather than purchasing servers to accommodate the spike and leaving them idle for most of the year, do registration in the cloud. Dial up more servers the week before registrations are due and dial them back down the week after, as in the sketch below.
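A minimal sketch of that dial-up/dial-down step, assuming the registration servers sit behind an AWS Auto Scaling group driven by boto3 (the group name and counts are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# The week before the deadline: scale the registration fleet up.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="registration-web",  # hypothetical group
    DesiredCapacity=10,
)

# The week after: scale it back down to the everyday baseline.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="registration-web",
    DesiredCapacity=2,
)
```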

Aside from technical change; cloud adoption may require organizational change.

Clouds Don’t Work in a Vacuum
I would bet good money that most developers reading this article have no idea which ports in their organization are closed to incoming TCP/IP connections. Knowing who to ask, however, matters far more than knowing the answer. In some sense every organization is its own private cloud, and networking professionals have been connecting things together for longer than developers have. Internet performance is also considerably different from intranet performance. So cultivate a relationship with whoever operates your firewall.

Passing through a firewall is overhead, and your organization’s infrastructure may not be cloud ready – though if your network people banter about DMZs, chances are it is. As stated earlier, authentication is important to cover; forcing users to authenticate multiple times within an application is intolerable to most of them.

Budgeting for servers may also be different from budgeting for compute cycles. There may be concern over whether compute cycles will end up costing more than purchasing a server or two. There is no shortcut here: just like any other budgeting, a developer must do the math. Again, this may be new territory – developers typically aren’t asked how much storage an application requires, because storage costs are spread across the projects an organization conducts. Budgeting difficulties may be a good reason not to do a project. The upside is that, after doing the math, a developer will likely find that costs are far below buying the hardware.
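Doing the math can be as simple as the sketch below; every figure is a hypothetical placeholder to be replaced with real quotes and measured usage:

```python
# Rough comparison: steady cloud compute vs buying two servers outright.
HOURS_PER_MONTH = 730

cloud_rate = 0.09        # $/hour per instance, hypothetical
instances = 2
months = 36              # a typical hardware refresh window

cloud_total = cloud_rate * instances * HOURS_PER_MONTH * months

server_price = 4000.00   # per box, hypothetical
ops_per_month = 150.00   # power, cooling, admin time, hypothetical

onprem_total = server_price * instances + ops_per_month * months

print(f"Cloud over {months} months:       ${cloud_total:,.0f}")   # ~$4,730
print(f"On-premises over {months} months: ${onprem_total:,.0f}")  # ~$13,400
```

The numbers will differ for every organization; the exercise, not the result, is the point.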

Conclusion
The cloud gives a developer control over all components from administration to assemblies. Added control comes with a price. A developer must venture into some new territory. This article provided a path to follow.

What is your opinion on cloud opportunities? Is it worth giving the cloud a trial? What is your personal experience of adopting a cloud option? Maybe you have some thoughts to share!

Polina Mikhan
Polina.Mikhan@altabel.com
Skype ID: poly1020
Business Development Manager
Altabel Group – Professional Software Development

The early days of video gaming seem long gone. Video game companies now offer players new graphics and play options to give them what they want and help them make better choices.
Cloud gaming is one of the newest and fastest-growing trends in the gaming industry. Until recently, gamers had to choose which game platform to buy: a console, a PC or a portable device. Thanks to cloud gaming services, gamers can now play freely through the cloud on any display, including TVs, monitors, laptops, tablets, and even smartphones.

But what actually is cloud gaming?
Cloud gaming is a form of online gaming that uses a cloud provider for streaming. This means that, like all online games – whether multiplayer titles or Xbox and PlayStation games – cloud games need a network connection to be played. However, instead of owning a playable copy of the game, you stream the game itself instantly from the cloud service.

The main advantages of cloud gaming are:
1. Instantly playable games in your browser. Cloud gaming allows a game to be streamed instantly and played within seconds.
2. No installations. All games are stored on the cloud service, so there is no need to download and install them on your hard drive.
3. No specific hardware required. Game content isn’t stored on the user’s machine and game code executes primarily on the server, so almost all modern games can run even on a less powerful computer. Your computer only needs to be able to play HD video (720p) over an internet connection of about 5 Mbit/s with low latency (a quick calculation of what that rate means follows this list).
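To put that 5 Mbit/s figure in perspective, here is a quick back-of-envelope calculation of what streaming at that rate consumes:

```python
# Data usage when streaming a game at the quoted 5 Mbit/s.
mbit_per_s = 5
seconds_per_hour = 3600

gb_per_hour = mbit_per_s * seconds_per_hour / 8 / 1000  # bits -> bytes -> GB
print(f"~{gb_per_hour:.2f} GB per hour of play")  # ~2.25 GB/hour
```

So an evening of play consumes a few gigabytes, which matters on capped connections.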

Of course, there are downsides as well as benefits. Let’s see what they are:
1. The main disadvantage of cloud gaming at the moment is the internet. Streaming a game to your TV or monitor at home requires a reliable and fast connection; without a decent one, games can become slow and unplayable.
2. The second-hand market. A large number of people buy second-hand games: once they complete a title, they generally trade the old game in for a new one. With cloud gaming you never own a physical copy, which makes trading in your old game impossible.

Gaikai and OnLive
Currently there are two growing cloud projects, both launched in 2009–2010: the OnLive Game Service and Gaikai, game platforms which breathed new life into video game development.

OnLive is available on a range of devices: TV consoles, tablets, PCs, Macs and smartphones. On the official web site/store, www.onlive.com, games can be purchased, rented or downloaded as free trials. For $100 you can also buy the OnLive Game System box, which runs cloud games directly on your TV, and the $50 OnLive Wireless Controller lets you play on your tablet or smartphone as well as your PC, Mac or TV.

OnLive also supports worldwide interactive play: you can share your playing with other players in the spectating Arena, post your best video moments instantly to Facebook, or talk with other players over Voice Chat.

Gaikai (www.gaikai.com), unlike OnLive, is a cloud-based gaming service that lets users play high-end PC and console games via the cloud and instantly demo games and applications from a web page or internet-connected device. Gaikai’s library of games is not huge, but it includes a number of popular titles that OnLive lacks, such as FIFA 12, Bulletstorm, Crysis 2, Dead Space 2 and Dragon Age 2.
The benefit of Gaikai’s service is that the company isn’t limited to gaming: it is actively soliciting streaming partners to use Gaikai’s infrastructure, servers, and platform.
On July 2, 2012, Sony Computer Entertainment acquired Gaikai for $380 million, with plans to establish its own cloud-based gaming service.

Betting on the future?
Is cloud gaming the future? Media companies like Sony, Gaikai and OnLive certainly think so, as they are investing in its development and promotion. At the same time, gamers remain doubtful about game quality and prefer playing on consoles to playing in the cloud. The main problems and uncertainties gamers point to are mostly connected with buying habits and with having to stay online while playing. The internet connection question looks set to be resolved by cable providers like AT&T, Verizon, Time Warner, and Comcast, which are planning to enter the cloud-gaming space, debuting their services as early as next year. The last thing to overcome is the attachment to physically owning games.

So if these downsides can be turned into benefits, it may be enough to win over even the biggest skeptics and make them believers.

Thank you for your attention and feel free to leave your comments and share your thoughts/experience at this point!

Best regards,
Katerina Bulavskaya
Altabel Group – Professional Software Development

