Posts Tagged ‘Cloud computing’
The infrastructure-as-a-service (IaaS) market has exploded in recent years. Google stepped into the fold of IaaS providers, somewhat under the radar. The Google Cloud Platform is a group of cloud computing tools for developers to build and host web applications.
It started with services such as the Google App Engine and quickly evolved to include many other tools and services. While the Google Cloud Platform was initially met with criticism of its lack of support for some key programming languages, it has added new features and support that make it a contender in the space.
Here’s what you need to know about the Google Cloud Platform.
1. Pricing
Google recently shifted its pricing model to include sustained-use discounts and per-minute billing. Billing starts with a 10-minute minimum and is then charged per minute. Sustained-use discounts begin after a particular instance is used for more than 25% of a month; users receive a discount on each incremental minute used after they reach the 25% mark.
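As a rough illustration of how that billing model works, here is a small Python sketch. The per-minute rate and the discount percentage below are assumptions made for the example, not Google's published figures.

```python
# Illustrative only: assumed rates, not Google's published pricing.
MINUTE_RATE = 0.0010                     # assumed $ per instance-minute
DISCOUNT = 0.30                          # assumed discount on "sustained" minutes
MINUTES_IN_MONTH = 30 * 24 * 60
THRESHOLD = 0.25 * MINUTES_IN_MONTH      # sustained use starts at 25% of the month

def monthly_bill(minutes_used: float) -> float:
    """Bill for one instance that ran `minutes_used` minutes this month."""
    billable = max(minutes_used, 10)              # 10-minute minimum per run
    full_price_minutes = min(billable, THRESHOLD)
    discounted_minutes = max(billable - THRESHOLD, 0)
    return (full_price_minutes * MINUTE_RATE
            + discounted_minutes * MINUTE_RATE * (1 - DISCOUNT))

# An instance running 60% of the month: minutes past the 25% mark are cheaper.
print(f"${monthly_bill(0.6 * MINUTES_IN_MONTH):.2f}")
```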
2. Cloud Debugger
The Cloud Debugger gives developers the option to assess and debug code in production. Developers can set a watchpoint on a line of code, and any time a server request hits that line, they get all of the variables and parameters at that point in the code. According to a Google blog post, there is no overhead to run it, and “when a watchpoint is hit very little noticeable performance impact is seen by your users.”
3. Cloud Trace
Cloud Trace lets you quickly figure out what is causing a performance bottleneck and fix it. Its core value is showing how much time your application spends processing certain requests. Users can also get reports that compare performance across releases.
4. Cloud Save
The Cloud Save API was announced at the 2014 Google I/O developers conference by Greg DeMichillie, the director of product management on the Google Cloud Platform. Cloud Save is a feature that lets you “save and retrieve per user information.” It also allows cloud-stored data to be synchronized across devices.
5. Hosting options
The Cloud Platform offers two hosting options: App Engine, its Platform-as-a-Service, and Compute Engine, its Infrastructure-as-a-Service. In the standard App Engine hosting environment, Google manages all of the components outside of your application code.
The Cloud Platform also offers managed VM environments that blend the auto-management of App Engine with the flexibility of Compute Engine VMs. The managed VM environment also gives users the ability to add third-party frameworks and libraries to their applications.
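To give a feel for the standard App Engine environment described above, here is a minimal sketch of a handler from the Python 2.7 runtime of that era, using the bundled webapp2 framework; the accompanying app.yaml routing is assumed. Your code is essentially just the request handler, while the serving stack, scaling and patching are managed by the platform.

```python
import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # App Engine routes the request to this handler; the application
        # only produces the response.
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from App Engine')

# The WSGI application App Engine serves (referenced from app.yaml).
app = webapp2.WSGIApplication([('/', MainPage)], debug=True)
```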
6. Andromeda
Google Cloud Platform networking tools and services are all based on Andromeda, Google’s network virtualization stack. Having access to the full stack allows Google to create end-to-end solutions without compromising functionality based on available insertion points or existing software.
According to a Google blog post, “Andromeda is a Software Defined Networking (SDN)-based substrate for our network virtualization efforts. It is the orchestration point for provisioning, configuring, and managing virtual networks and in-network packet processing.”
7. Containers
Containers are especially useful in a PaaS situation because they help speed up deployment and the scaling of apps. For container management and virtualization on the Cloud Platform, Google offers Kubernetes, its open-source container scheduler. Think of it as a Container-as-a-Service solution, providing management for Docker containers.
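As a small illustration of what “management for Docker containers” means in practice, here is a sketch that lists the containers (pods) running in a cluster. It assumes the official Kubernetes Python client and an existing cluster with local kubeconfig credentials; the cluster itself is hypothetical.

```python
from kubernetes import client, config

# Load credentials for an existing cluster from ~/.kube/config.
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    # Each pod wraps one or more containers scheduled by Kubernetes.
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```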
8. Big Data
The Google Cloud Platform offers a full big data solution, but there are two unique tools for big data processing and analysis on Google Cloud Platform. First, BigQuery allows users to run SQL-like queries on terabytes of data. Plus, you can load your data in bulk directly from your Google Cloud Storage.
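As an example of the kind of “SQL-like queries on terabytes of data” BigQuery runs, here is a sketch using the google-cloud-bigquery Python client (assumed to be installed and authenticated with default project credentials) against one of Google’s public sample datasets.

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project credentials are configured

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

# BigQuery scans the table server-side and streams back only the results.
for row in client.query(query).result():
    print(row.name, row.total)
```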
The second tool is Google Cloud Dataflow. Also announced at I/O, Google Cloud Dataflow allows you to create, monitor, and glean insights from a data processing pipeline. It evolved from Google’s MapReduce.
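For a sense of the pipeline model, here is a minimal word-count sketch written with the Apache Beam Python SDK, the open-source descendant of the Dataflow programming model; treat it as illustrative, since the SDK available at the time of the original announcement differed. The same pipeline can run locally or be handed to a managed runner such as Cloud Dataflow without changing the pipeline code.

```python
import apache_beam as beam

# A tiny pipeline: read lines, split into words, count occurrences.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Read" >> beam.Create(["the cat sat", "the cat ran"])
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "GroupAndSum" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```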
9. Maintenance
Google does routine testing and regularly sends patches, but it also sets virtual machines to live-migrate away from maintenance as it is being performed.
“Compute Engine automatically migrates your running instance. The migration process will impact guest performance to some degree but your instance remains online throughout the migration process. The exact guest performance impact and duration depend on many factors, but it is expected most applications and workloads will not notice,” the Google developer website said.
VMs can also be set to shut down cleanly and restart away from the maintenance event.
10. Load balancing
In June, Google announced Cloud Platform HTTP Load Balancing, which balances traffic across multiple compute instances in different geographic regions.
“It uses network proximity and backend capacity information to optimize the path between your users and your instances, and improves latency by connecting users to the closest Cloud Platform location. If your instances in one region are under heavy load or become unreachable, HTTP load balancing intelligently directs new requests to your available instances in a nearby region,” a Google blog post said.
Taken from TechRepublic
Lina Deveikyte – Marketing Manager (LI page)
Skype ID: lina_deveikyte
Altabel Group – Professional Software Development
Software-defined networking (SDN) is a hot, much-debated topic, and although still in its infancy, it offers the potential to transform how complex networks work. But don’t be fooled into thinking it’s just more industry hype: the era of Software Defined Everything is already upon us. Software is being applied to everything from servers, storage and data centres right through to arguably the most ground-breaking piece of the jigsaw – the Wide Area Network.
SDN changes the way companies build their IT environments by essentially moving the “control plane” of the network away from each individual device in the network to a central controller that works with all the devices, both virtual and physical. This allows for a single controller to configure or manage the complete network, as opposed to each device managing its own functionality and being programmed individually. The technology has huge benefits for businesses, including reducing IT expenditure and enabling changes to the network quickly and easily.
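To make the control-plane idea concrete, here is a purely conceptual Python sketch of a central controller pushing a single policy to every switch it manages, instead of each device being configured individually. The Switch and Controller classes and their methods are hypothetical, not any vendor’s or standard’s real API.

```python
class Switch:
    """A managed network device reduced to its forwarding (data) plane."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []

    def install_rule(self, match, action):
        self.flow_table.append((match, action))


class Controller:
    """Central control plane with a global view of every switch."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, match, action):
        # One change, propagated to the whole network at once.
        for switch in self.switches:
            switch.install_rule(match, action)


network = Controller([Switch("edge-1"), Switch("edge-2"), Switch("core-1")])
network.apply_policy(match={"dst_port": 80}, action="forward:web-servers")
```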
The importance of the network
SDN deployments are still very limited and at an early stage of development. This is partly because, although today’s corporate networks use open standards such as IP and Ethernet, configuring the networks themselves still requires a lot of manual work: each device on the network has its own policies and consoles. Making significant changes in the network – even with existing hardware – can be time-consuming, potentially taking a week or two. With the move towards server virtualisation and cloud computing, this has become even more complex.
With this in mind, it is no surprise that SDN is making its way to centre stage. SDN is being tackled from all sides of the ecosystem, from virtualisation vendors like VMware to traditional networking providers like Cisco. Not only is it going to fundamentally change the business models of the networking and server industries, but it is also going to escalate the importance of the network.
The value that SDN poses for businesses is immense. It holds greater potential for productivity increases from IT than any other development because of the way it acts as a unifying force between disparate elements – computing, networking, virtualisation, information, and business logic. There’s no doubt that SDN will be a disruptive force across cloud, carrier and enterprise networks, likely in that order. The natural progression of turning hardware into software will result in re-architected networks, data centres and infrastructures.
What the future holds
The integration of everything into the network will become a no-brainer in the coming year and this will essentially transform the network into the epicenter of ICT services. While no one can predict the SDN end-game, we are at the cusp of a revolution in the way global networks are designed, built, and managed.
By providing more real-time intelligence and deep application integration, SDN is going to enable enterprises to realise innovation earlier, with applications rolled out in hours instead of weeks. Organisations will achieve never-before-seen levels of agility while reducing both capital and operational overhead to the lowest levels ever delivered in enterprise solutions.
As a platform, SDN provides the potential to drive the next generation of IT services. Early high-visibility adopters like Google and the recent significant increase in VC funding in the SDN area are fuelling momentum, and the emerging era of Software Defined Everything looks set to change the power of the network for good. Organisations should be looking very seriously at how SDN can benefit their businesses before their competitors get there first.
The early days of video gaming seem to be long gone. Video game companies keep offering players new graphics and playing options so that they get what they want and can make better choices.
Cloud gaming is one of the newest and fastest-growing trends in the gaming industry. Until recently, gamers had to choose which platform to buy: console, PC or portable device. Thanks to cloud gaming services, gamers can now play through the cloud on almost any display, including TVs, monitors, laptops, tablets and even smartphones.
But what actually is cloud gaming?
Cloud gaming is a form of online gaming that uses a cloud provider for streaming. That means that, like all online games, whether multiplayer titles or Xbox and PlayStation games, cloud games need a network connection and some hardware to play on. However, instead of keeping a playable copy of the game locally, you stream the game itself instantly from the cloud service.
The main advantages of cloud gaming are:
1. Instantly playable games in your browser. Cloud gaming allows a game to be streamed instantly and played within seconds.
2. No installation needed. All games are stored on the cloud service, so there is no need to download and install them on your hard drive.
3. No specific hardware required. Game content isn’t stored on the user’s machine and the game code runs primarily on the server, so almost any modern game can be played even on a less powerful computer. All your machine really needs is the ability to play HD video (720p) and an Internet connection of around 5 Mbit/s with low latency.
The negative effects, however, may go beyond the positive benefits and features. So let’s see what they are:
1. The main disadvantage of cloud gaming at the moment is the internet. It requires a reliable and fast connection to stream the game to your TV or monitor at home; without a decent connection, games can feel slow and unplayable.
2. The second-hand market. A large number of people buy second-hand games: once they have completed a title, they generally trade it in for a new one. With cloud gaming, you never own a physical copy, which makes the whole process of trading in your old game for a new one redundant.
Gaikai and OnLive
Currently there are two growing cloud gaming projects, both launched in 2009-2010: the OnLive Game Service and Gaikai, platforms that have breathed new life into video game development.
OnLive is available on a range of devices: TVs, tablets, PCs, Macs and smartphones. On the official site and store, www.onlive.com, games can be purchased, rented or downloaded as free trials. In addition, for $100 you can buy the OnLive Game System box, which lets cloud games run even on your TV, and for $50 the OnLive Wireless Controller lets you play on your tablet or smartphone as well as from your PC, Mac or TV.
OnLive also provides worldwide interactive play: you can share your game with other players in the spectating Arena, post your best video moments instantly to Facebook, or talk to other players via Voice Chat.
Gaikai (www.gaikai.com), by contrast, is a cloud-based gaming service that lets users play high-end PC and console games via the cloud and instantly demo games and applications from a web page or any internet-connected device. Gaikai’s library of games is not that big, but it includes a number of popular titles that OnLive does not have, for example FIFA 12, Bulletstorm, Crysis 2, Dead Space 2 and Dragon Age 2.
The benefit of Gaikai’s service is that the company isn’t limited to gaming. The company is actively soliciting streaming partners to utilize Gaikai’s infrastructure, servers, and platform.
On July 2, 2012, Sony Computer Entertainment agreed to acquire Gaikai for approximately $380 million, with plans to establish its own cloud-based gaming service.
Betting on the future?
Is cloud gaming the future? Companies like Sony, Gaikai and OnLive certainly think so, as they keep investing in its development and promotion. At the same time, gamers are still doubtful about game quality and prefer playing on consoles rather than in the cloud. The main problems and uncertainties gamers point to mostly concern buying habits and having to stay online while playing. The internet connection question looks set to be resolved by cable providers such as AT&T, Verizon, Time Warner and Comcast, which are planning to enter the cloud-gaming space, debuting their services as early as next year. The last thing to overcome is the attachment to physically owning games.
So perhaps, if these downsides can be turned into benefits, even the biggest skeptics will become believers.
Thank you for your attention and feel free to leave your comments and share your thoughts/experience at this point!
Not long ago I had to answer, or at least try to answer, one very hard question: which is better for cloud computing, Windows Azure or Amazon Web Services (AWS)? Why can no other providers fit the bill, you may ask? The thing is that most of them can’t match the price points or scale that Amazon and Azure provide. In general, building up the operational capability to provide a service like AWS or Azure is a difficult proposition: both provide multiple locations and pay-by-the-hour capability, and that is really hard to do without massive capital behind it.
As the Amazon vs Azure debate is a long-running one, it was really hard to find the answer. I consulted our technical specialists, googled the problem, asked LinkedIn and Xing members to share their opinions, conducted some polls, and so on. As you can imagine, there was no definite answer, no cure-all pill, and opinions differed.
Both AWS and Windows Azure are quite young: AWS “was born” in 2006 and Windows Azure in 2008. Yet their services are used by world-known corporations such as NASA, Ericsson, Boeing and Xerox, and both are considered leaders of today’s cloud sphere. The two platforms are very alike and have quite similar characteristics.
Of the two gorillas in the cloud space, some developers prefer Amazon and others think Azure is the best, but the details are often sparse as to why one option is better than the other. Among the reasons named for choosing AWS or Azure over the other are cost, available resources, development tools and ecosystem.
Language support also matters to a great extent. AWS is platform-agnostic, while Azure is Windows-based. Getting stuck in a single framework like .NET, where there is only one “provider” of .NET tools, can be a huge hindrance to any future decisions you make as a company.
Microsoft (and Azure as default) seems to be all about lock-in. Lock-in on the operating system, lock-in on the language platform, as well as lock-in on the Azure services. Also, many companies do have to solve big compute problems that Java, unlike .NET, is well positioned for. While many larger companies don’t have to be as concerned with lock-in — this is a very scary thought for most start-ups that need a clearer longer-term cost structure.
Mostly the real choice is between IaaS and PaaS. Azure is PaaS, while AWS is special in that it provides both IaaS and PaaS. One of the advantages of AWS is that you get the platform capabilities plus the freedom to easily deploy brand-new technologies before they become part of the platform. So people need to decide what is more important for them and how much being cutting-edge matters.
And what do you prefer personally: Amazon Web Services or Windows Azure? Feel free to share your opinions and considerations.
Microsoft has officially rolled out Windows Server 2012, the server partner to the Windows 8 operating system it is launching on 26 October alongside the eagerly anticipated Surface tablet.
Unveiling the new offering on the 4th of September, Satya Nadella, president of Redmond’s Servers and Tools Business, dubbed the new-gen system the first “cloud OS.” In his keynote speech, Nadella described how Windows Server 2012 is a cornerstone of the Cloud OS, which provides one consistent platform across private, hosted and public clouds.
Windows Server 2012 is seen as a central part of Microsoft’s new enterprise ecosystem, which also features Windows Azure and System Center 2012 for customers to manage and deliver applications and services across private, hosted and public clouds.
The Microsoft Cloud OS provides enterprises with a highly elastic and scalable infrastructure with always-on, always-up services. Automated management, robust multitenant support, and self-service provisioning help enterprises transform their datacenters to support the coordination and management of pooled sets of shared resources at the datacenter level, replacing fragmented management of individual server nodes.
The new operating system provides a comprehensive set of capabilities across the enterprise private cloud datacenter, and public cloud datacenters.
• Agile Development Platform: The Microsoft Cloud OS allows enterprises to build applications they need using the tools they know, including Microsoft Visual Studio and .NET, or open-source technologies and languages, such as REST, JSON, PHP, and Java.
• Unified DevOps and Management: The Microsoft Cloud OS supports unified DevOps and unified application life-cycle management with common application frameworks across development and operations. With Microsoft System Center integration with development environments such as Visual Studio, enterprises can achieve quick time-to-solution and easy application troubleshooting and management.
• Common Identity: The Microsoft Cloud OS implements Active Directory as a powerful asset across environments to help enterprises extend to the cloud with Internet scale security using a single identity and to securely extend applications and data to devices.
• Integrated Virtualization: To help enterprises achieve the modern datacenter, the Microsoft Cloud OS includes an infrastructure which provides a generational leap in agility, leveraging virtualization to deliver a highly scalable and elastic infrastructure with always-on, always-up services across shared resources and supporting cloud service delivery models with more automated management and self-service provisioning. With Windows Server 2012, the Microsoft Cloud OS is engineered for the cloud from the metal up with virtualization built as an integrated element of the operating system, not layered onto the operating system.
• Complete Data Platform: The Microsoft Cloud OS fully supports large volumes of diverse data, advanced analytics, and enterprise BI life-cycle management, with a comprehensive set of technologies to manage petabytes of data in the cloud, to millions of transactions for the most mission-critical applications, to billions of rows of data in the hands of end users for predictive and ad-hoc analytics.
At the core of the Microsoft Cloud OS is Windows Server 2012. The software supports 320 logical processors and 4TB of physical memory per server, with 64 virtual processors per virtual machine. Virtual disks can scale up to 64TB apiece, according to the firm, or 32 times what it said the competition can offer at the moment, adding that Server 2012 is capable of virtualising 99 per cent of all SQL databases.
New features of Windows Server 2012 include a refreshed version of Hyper-V, with expanded network virtualisation capabilities for running multiple virtual network configurations on the same LAN. Also debuting is the new Resilient File System (ReFS), which improves reliability.
Appearance-wise, Windows Server 2012 is built in the Modern UI style, featuring a tile-based interface like that of Windows 8 and Windows RT.
Microsoft officials say that the launch of Windows Server 2012 is perhaps the biggest release of their server products in history, bigger than NT. They also believe that Windows Server 2012 ushers in the era of the cloud operating system.
Do you believe the release of Windows Server 2012 is a breakthrough in the server industry? And do you think Microsoft Cloud OS will be the winner in the competition among emerging Cloud operating systems?
Thanks for sharing your opinion :)
The IT world continues to sprint forward at an unrelenting pace and these are its five hottest trends so far in 2012. Let’s count them down.
5. The projectization of IT
Projects have always been a major part of IT, but in the past a lot of IT resources were also dedicated to keeping the lights on and keeping the world running. Companies now take those operational aspects of IT for granted and want the existing infrastructure automated as much and as cheaply as possible. There’s little glory or job security in keeping the company’s existing systems on life support.
That’s why outsourcing and the cloud are such hot commodities. They allow companies to offload IT operational costs and focus their IT staff on the next project to upgrade systems, streamline business processes, and launch new IT projects to transform the business. More than ever, IT is all about the projects. It’s about the vendors that can help support IT projects (and there are infrastructure jobs for IT pros there). It’s about the business analysts and project managers who can organize people and resources to pull off projects on time and on budget. It’s about the CIOs who now base their budget and staffing decisions largely on projects rather than just the cost of keeping the server room running.
4. PC/Mobile convergence
Employees are more mobile than ever. There are a lot of factors driving that, from increased telecommuting to work/life balance where people leave early to pick up their kids and then work the rest of the afternoon from a cafe or the stands at the soccer field. There are also industries such as transportation and health care that have always had lots of non-desk employees and have had to shoe-horn computing solutions into their work environments.
The growing capabilities of smartphones and tablets are filling many of these needs as these mobile devices become more able to do the tasks of a full PC. Still, there are times when workers can be even more productive when working with a full keyboard and mouse. That’s why we’re beginning to see the rise of products like Motorola Webtop (a Smartphone docking solution), Ubuntu for Android (desktop OS embedded in a Smartphone), and Microsoft Surface (a tablet with a kickstand and keyboard cover). The lines between traditional PCs and mobile devices continue to blur.
3. Desktop thinning
Let’s be honest. The proliferation of mobile devices and the Bring Your Own Device trend has created a lot of headaches and nightmare scenarios for the IT department. For companies that need stronger security and more control over the employee environment, one of the easiest answers to the problem is to move to solutions like desktop virtualization or terminal services from vendors like VMware, Citrix, and Microsoft.
That allows the IT department to create a standard environment with all the company apps that employees can access from a company PC, their home PC or laptop (over VPN), or even a tablet or Smartphone. The environment looks and feels like a traditional PC but the apps and all the data remain on the company servers which are more secure and easier for the IT department to manage and troubleshoot. This technology has been around for years, as “thin clients.” But there are three factors driving it forward in 2012: 1.) BYOD, 2.) mobile devices, and 3.) it lets companies delay PC upgrades since it pushes all of the heavy lifting to the servers. So companies still aren’t going to thin clients in large numbers, but their desktop environments are getting a lot thinner.
2. Big Data
If “Cloud Computing” has been the overhyped and overused IT term of the last several years, the new buzz phrase of 2012 is “Big Data.” Like Cloud, Big Data gets abused by marketers. The main thing you need to understand when it comes to Big Data is that it’s about bringing together the “structured” internal data that your company has always used for its reports and mixing it with public “unstructured” data like social media streams and freely available government data (on traffic, agriculture, crime, etc.).
The act of combining these two types of data can give you new insights into how your customers feel about your products versus your competitors (from the social media streams), it can help you anticipate changes in product demand, it can help you use government trending data to anticipate the growth or decline of markets, and more. That’s why Big Data is such a big deal. But, don’t be fooled. It’s still in its infancy. There aren’t a ton of great commercial tools yet that can help you harness Big Data. It takes the right IT pros who know how to work some data magic and they are in high demand.
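As a toy illustration of that idea, here is a sketch that joins “structured” internal sales figures with “unstructured” social-media sentiment that has already been scored; the data, column names and the pandas approach are all hypothetical, chosen only to show the combination.

```python
import pandas as pd

# Structured internal data: this quarter's sales per product.
sales = pd.DataFrame({
    "product": ["A", "B", "C"],
    "units_sold": [1200, 800, 450],
})

# Unstructured data, already distilled: average sentiment scored from tweets.
sentiment = pd.DataFrame({
    "product": ["A", "B", "C"],
    "avg_sentiment": [0.62, -0.15, 0.30],
})

# Joining the two views is where the new insight comes from.
combined = sales.merge(sentiment, on="product")
print(combined.sort_values("avg_sentiment", ascending=False))
```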
1. Cloud, cloud, and cloud
There are essentially three types of clouds — the full Internet cloud (some call it the “public cloud”), the private cloud (which looks a lot like a traditional data center, but with lots of virtualization), and the hybrid cloud (an integrated mix of public and private clouds). Make no mistake; all three types of clouds are thriving in 2012. The public cloud is the one that most people think of when they hear “cloud” and it’s mostly about hosted apps like Salesforce.com and Workday.com as well as Internet-hosted infrastructure like Rackspace and Amazon AWS. But, we’re increasingly seeing traditional IT players like Microsoft, IBM, and HP quietly become big players in the cloud as well.
The private cloud and the hybrid cloud are for larger companies and organizations that need stronger security or have legacy apps that are not easily moved or migrated to the cloud. Both of these types of cloud solutions are picking up steam, especially in companies that have already moved their easy stuff to the cloud and are now digging in and dealing with some of the big, expensive, entrenched stuff.
What are the hottest IT trends in your world so far in 2012?
It seems most companies understand the opportunities that cloud computing solutions and services open up for them, especially SMBs. So now the question is: how do you choose a good provider, the right one for your company, and to what extent should cloud computing services be used? The complexities are numerous – issues such as security management, attack response and recovery, system availability and performance, the vendor’s financial stability and its ability to comply with the law all need to be considered. Here are a number of tips formulated in this regard (some are taken from a CIO article):
1) Choose trusted providers. There are a number of cloud tech companies to choose from today, and new ones go live each month. Even so, for cloud services it’s better to stick with trusted, solid companies. To name a few: Microsoft, Google, Intuit, Dropbox, Apple, Amazon, Salesforce. These are companies with deep pockets and experience in dealing with security, and your data is an important part of their business.
2) Distribute between free and paid accounts. For storing financial or similarly sensitive information, paid accounts are preferable. For less critical data and applications, free accounts from big, trusted cloud service providers may work well. For instance, Google can afford to offer decent free accounts because its business is well established and its free services act as bait aimed at attracting new users and then gently pushing them towards paid services and premium accounts.
3) Select the right apps and data for the public cloud. Some businesses, mainly start-up companies, begin using the public cloud for all applications, including mission-critical apps and their data. However, public clouds are not for every organization or every application: what can reasonably rely on the default security provided by most cloud service providers are websites, application development and testing, online product catalogs and product documentation.
4) Evaluate and add security if it makes sense. CSPs can provide significantly different levels of public cloud security, and the ISO/IEC 27000 series of standards provides guidelines for evaluating this. If necessary, security measures used in an organization’s internal private cloud may need to be extended to its public cloud instances, and some cloud products, such as CloudSpan, allow this.
5) Make use of third-party auditing services. When it comes to security compliance, organizations need not simply take the CSP’s word for it. Third-party auditing services can be used to audit the security actually delivered and compare it to what was promised.
6) Add authentication layers. Most CSPs provide good authentication services for public cloud instances, and some products, like Halo NetSec, can help add an additional layer of authentication (a small sketch of adding an extra authentication factor follows this list). Before doing this you need to weigh the benefits of better public cloud security against the costs of increased network latency, possible performance degradation and additional points of failure.
7) Weigh the effect of additional security on integration. Adding security on top of the CSP’s defaults may affect overall application performance as well as identity and access management. This is especially important to consider if you work with mission-critical applications that need to integrate with other business applications.
8) Make the SLA’s security guarantees clear to yourself. Public cloud security guarantees should be clearly stipulated as service level agreements in the contract, so make sure that transparent monitoring and reporting functions are available to you as a customer, and that security processes, procedures and practices are transparent and verifiable, so that you can rely on this information.
9) Scrutinize logging and monitoring. Comparing one CSP’s logging and monitoring practices with another’s before you sign an SLA may reveal subtle differences in the security provided, so this is another key to ensuring public cloud security.
10) Add encryption. You may want to employ your own encryption instead of, or in addition to, the encryption provided by the CSP. A number of installable products and SaaS vendors can do this type of encryption on the fly (VPN-enabled cloud instances also fall under this category of augmented public cloud security). When this is done, only the customer and the third party know the key; the CSP does not. A minimal client-side encryption sketch follows this list.
11) Spread outage risk across multiple, even redundant, CSPs. Since cloud provisioning tools these days come pre-integrated with the leading CSPs, it’s possible to spin up additional server instances with multiple CSPs automatically on demand: they are turned on when average CPU utilization reaches a certain threshold and turned off once utilization drops. When spinning up additional instances, it may also make sense to use different CSPs in round-robin fashion.
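Following up on tip 6, here is a minimal sketch of one extra authentication layer: a time-based one-time password (TOTP) check performed by your own application in front of the CSP’s login. It assumes the pyotp Python library; the secret handling shown is simplified for illustration.

```python
import pyotp

# Generated once per user and stored securely on your side (never with the CSP).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning code for the user's authenticator app:", secret)
print("Current one-time password:", totp.now())

def second_factor_ok(submitted_code: str) -> bool:
    # Verify the 6-digit code the user typed in before allowing cloud access.
    return totp.verify(submitted_code)
```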
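And for tip 10, here is an illustrative sketch of client-side encryption before upload, using the Python cryptography library’s Fernet recipe as a stand-in for whichever product you choose: the key is generated and kept by the customer, so the CSP only ever stores ciphertext.

```python
from cryptography.fernet import Fernet

# The key stays with the customer (or a trusted third party), not the CSP.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contents of quarterly-financials.csv"
ciphertext = cipher.encrypt(plaintext)   # this is what gets uploaded to the CSP

# After downloading, decrypt locally with the same key.
assert cipher.decrypt(ciphertext) == plaintext
```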
Thus, as you can see, the experience of using cloud services can be adjusted and improved by following some of this advice. What’s crucial is finding a balance between cloud security and performance. Naturally, there is always a tradeoff: adding layers of security may come at the expense of slower applications and potentially additional points of failure. Figuring out the right balance between security and performance, difficult as it is, is a must for running a strong business today.
Helen Boyarchuk – Business Development Manager (LI page)
Helen.Boyarchuk@altabel.com | Skype ID: helen_boyarchuk
Altabel Group – Professional Software Development