The infrastructure-as-a-service (IaaS) market has exploded in recent years. Google stepped into the fold of IaaS providers, somewhat under the radar. The Google Cloud Platform is a group of cloud computing tools for developers to build and host web applications.
It started with services such as the Google App Engine and quickly evolved to include many other tools and services. While the Google Cloud Platform was initially met with criticism of its lack of support for some key programming languages, it has added new features and support that make it a contender in the space.
Here’s what you need to know about the Google Cloud Platform.
1. Pricing
Google recently shifted its pricing model to include sustained-use discounts and per-minute billing. Billing starts with a 10-minute minimum and is charged per minute thereafter. Sustained-use discounts begin once a particular instance is used for more than 25% of a month; users receive a discount for each incremental minute used beyond that mark.
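To make the billing model concrete, here is a back-of-the-envelope sketch in Python. The per-minute rate and the flat 20% discount are invented for illustration (Google's real sustained-use discounts are tiered), but the 10-minute minimum and the 25% threshold follow the description above.

```python
# Illustrative billing math only; the rate and discount below are made up
# for the example, not taken from Google's actual price list.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def billed_cost(minutes_used, price_per_minute, discount=0.20):
    """Bill with a 10-minute minimum, per-minute thereafter; minutes
    beyond 25% of the month earn a (hypothetical, flat) discount."""
    minutes = max(minutes_used, 10)           # 10-minute minimum
    threshold = 0.25 * MINUTES_PER_MONTH      # sustained use kicks in here
    full_price = min(minutes, threshold)
    discounted = max(minutes - threshold, 0)
    return (full_price * price_per_minute
            + discounted * price_per_minute * (1 - discount))

# An instance running half the month at an assumed $0.001/minute:
print(billed_cost(0.5 * MINUTES_PER_MONTH, 0.001))  # -> 19.44
```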
2. Cloud Debugger
The Cloud Debugger gives developers the option to assess and debug code in production. Developers can set a watchpoint on a line of code, and any time a server request hits that line, they get all of the variables and parameters of that code. According to a Google blog post, there is no overhead to run it, and “when a watchpoint is hit very little noticeable performance impact is seen by your users.”
3. Cloud Trace
Cloud Trace lets you quickly figure out what is causing a performance bottleneck and fix it. Its core value is showing you how much time your application spends processing certain requests. Users can also get reports that compare performance across releases.
4. Cloud Save
The Cloud Save API was announced at the 2014 Google I/O developers conference by Greg DeMichillie, the director of product management for the Google Cloud Platform. Cloud Save is a feature that lets you “save and retrieve per user information.” It also allows cloud-stored data to be synchronized across devices.
5. Hosting options
The Cloud Platform offers two hosting options: App Engine, its Platform-as-a-Service, and Compute Engine, its Infrastructure-as-a-Service. In the standard App Engine hosting environment, Google manages all of the components outside of your application code.
The Cloud Platform also offers managed VM environments that blend the auto-management of App Engine with the flexibility of Compute Engine VMs. The managed VM environment also gives users the ability to add third-party frameworks and libraries to their applications.
6. Andromeda
Google Cloud Platform networking tools and services are all based on Andromeda, Google’s network virtualization stack. Having access to the full stack allows Google to create end-to-end solutions without compromising functionality based on available insertion points or existing software.
According to a Google blog post, “Andromeda is a Software Defined Networking (SDN)-based substrate for our network virtualization efforts. It is the orchestration point for provisioning, configuring, and managing virtual networks and in-network packet processing.”
7. Containers
Containers are especially useful in a PaaS situation because they speed up deployment and app scaling. For container management and virtualization on the Cloud Platform, Google offers its open source container scheduler, Kubernetes. Think of it as a Container-as-a-Service solution, providing management for Docker containers.
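For a feel of what driving Kubernetes programmatically looks like, here is a minimal sketch using the official Kubernetes Python client; it assumes the client library is installed and a kubeconfig already points at a cluster.

```python
# Inspecting a cluster with the official Kubernetes Python client
# (pip install kubernetes); assumes ~/.kube/config is already set up.
from kubernetes import client, config

config.load_kube_config()   # read cluster credentials from kubeconfig
v1 = client.CoreV1Api()

# List every pod the scheduler is currently managing, across namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```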
8. Big Data
The Google Cloud Platform offers a full big data solution, and two tools for big data processing and analysis stand out. First, BigQuery allows users to run SQL-like queries on terabytes of data, and you can load your data in bulk directly from Google Cloud Storage.
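As a quick taste of those SQL-like queries, here is a minimal sketch using the google-cloud-bigquery Python client against one of Google's public sample datasets; it assumes the library is installed and credentials are configured.

```python
# Running a query with the google-cloud-bigquery client library
# (pip install google-cloud-bigquery); credentials are assumed to be
# configured in the environment.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 5
"""
# Iterating the job waits for the query to finish and streams rows back.
for row in client.query(query):
    print(row.word, row.total)
```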
The second tool is Google Cloud Dataflow. Also announced at I/O, Google Cloud Dataflow allows you to create, monitor, and glean insights from a data processing pipeline. It evolved from Google’s MapReduce.
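Dataflow's programming model lives on in the open source Apache Beam SDK, so a pipeline can be sketched as below; the input and output paths are placeholders, and the default local runner is assumed (the same pipeline can be sent to Cloud Dataflow by switching runner options).

```python
# A tiny word-count pipeline in Apache Beam, the open source descendant
# of the Cloud Dataflow SDK (pip install apache-beam).
import apache_beam as beam

with beam.Pipeline() as p:
    (p
     | 'Read'   >> beam.io.ReadFromText('input.txt')       # placeholder path
     | 'Split'  >> beam.FlatMap(lambda line: line.split())
     | 'Pair'   >> beam.Map(lambda word: (word, 1))
     | 'Count'  >> beam.CombinePerKey(sum)
     | 'Format' >> beam.Map(lambda kv: f'{kv[0]}: {kv[1]}')
     | 'Write'  >> beam.io.WriteToText('counts'))          # placeholder path
```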
9. Maintenance
Google does routine testing and regularly sends patches, but it also sets all virtual machines to live-migrate away from maintenance as it is being performed.
“Compute Engine automatically migrates your running instance. The migration process will impact guest performance to some degree but your instance remains online throughout the migration process. The exact guest performance impact and duration depend on many factors, but it is expected most applications and workloads will not notice,” the Google developer website said.
VMs can also be set to shut down cleanly and restart away from the maintenance event.
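A hedged sketch of how that setting might be flipped with the google-api-python-client library follows; the project, zone, and instance names are placeholders, and authentication setup is omitted.

```python
# Opting an instance out of live migration via the Compute Engine v1 API
# (pip install google-api-python-client); names below are placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')
compute.instances().setScheduling(
    project='my-project', zone='us-central1-a', instance='my-instance',
    body={
        'onHostMaintenance': 'TERMINATE',  # shut down instead of migrating
        'automaticRestart': True,          # come back up after maintenance
    }).execute()
```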
10. Load balancing
In June, Google announced Cloud Platform HTTP Load Balancing, which balances the traffic of multiple compute instances across different geographic regions.
“It uses network proximity and backend capacity information to optimize the path between your users and your instances, and improves latency by connecting users to the closest Cloud Platform location. If your instances in one region are under heavy load or become unreachable, HTTP load balancing intelligently directs new requests to your available instances in a nearby region,” a Google blog post said.
Taken from TechRepublic
Developers are in a unique position to educate themselves and to capitalize on cloud opportunities. Unlike learning new programming techniques or frameworks, cloud learning moves beyond development: there are infrastructure aspects to consider, as well as potential organizational process and policy changes. However, developers know the application, and cloud administration is a much lower bar than, for example, network administration. If you’re looking for a strategy to follow to cloud enlightenment, you’re reading the right article.
Give the Cloud a Whirl
When it comes to the cloud, don’t wait for the storm to hit you; educate yourself. There is no substitute for experimentation and hands-on experience. Start by separating reality from marketing. Almost every cloud vendor, Microsoft Azure among them, offers a free trial. If you are truly new to cloud development, imagine borrowing a company server for three months, only with no setup time. Just turn it on and away you go.
Given that experimentation time is limited, go for breadth rather than depth. Get a taste of everything. What most developers find is that, after some initial orientation and learning, the experience becomes what they already know. For example, Azure has an ASP.NET-based hosting model called Web Roles. After configuring and learning Web Role instrumentation, the development experience is ASP.NET. Learning Azure Web Roles amounts to learning some new administration and configuration skills, coupled with a handful of new classes. The rest of what you need to know is nothing new if you’ve done ASP.NET!
Developers must keep their time constrained. Struggling for hours with something new is often not worth the effort, and wide adoption of anything that difficult to work with should be questioned. Cloud offerings are typically not niche or differentiating skills like, for example, SQL Server tuning.
Whatever cloud option a developer starts with, understand the authentication options. Intranet developers typically take authentication for granted; ASP.NET makes it look easy. Consider all the moving parts involved in making authentication automatic and secure. Understanding authentication is especially important if parts of an application will live within the organization’s datacenter and parts within the cloud provider.
Finally, look for the right opportunities to apply these new skills.
Navigating the Fog
Most developers are adept at picking when to jump on new technology and when to pull back. Unlike adopting, for example, a new Web Services approach, adopting a cloud option entails learning a little more administration. The cloud can give a developer total control, but that added administration is the price.
Developers may find themselves in new territory here. Typically a “hardware person” selects a machine and a “network person” selects and configures a firewall. Cloud portals make network and server configuration easier, but the portal doesn’t eliminate the configuration role. The public cloud handles the hardware, but the developer must choose, for example, how many CPUs, servers, and load balancers will be needed. This lowers the administration bar, but it may also place the burden on the developer.
The cloud will not be the right option for every project, but give it a fair chance. Decision makers tend to have one of two reactions to the cloud: outright rejection or wild-eyed embrace. Neither is healthy; there is middle ground. Don’t let unrealistic expectations set by marketing brochures guide the first project. The hands-on experimentation described earlier in the article will be helpful here. Set the bar low, and make the first experience a good experience.
Supplementing with the Cloud
One potential approach is to supplement with the cloud: let the cloud handle some part of the application. For example, requirements may dictate a web page to handle user registration. Registrations often have deadlines and, given human nature, people often procrastinate, so registration traffic is likely to spike in the days before the deadline. Rather than purchasing servers to accommodate the spike and leaving them idle for most of the year, do registration in the cloud: dial up more servers the week before registrations are due and dial the server count back down the week after.
Aside from technical change, cloud adoption may require organizational change.
Clouds Don’t Work in a Vacuum
I would bet good money that most developers reading this article have no idea which ports in their organization are closed to incoming TCP/IP connections. However, knowing whom to ask is far more important than what you know. In some sense, every organization is its own private cloud, and networking professionals have been connecting things together longer than developers. Internet performance is also considerably different from intranet performance. Cultivate relationships with whoever operates your firewall.
Passing through a firewall is overhead, and your organization’s infrastructure may not be cloud-ready. If your network people banter about DMZs, though, chances are it is. As stated earlier, authentication is important to cover: forcing users to authenticate multiple times within an application is intolerable to most users.
Budgeting for servers may be different than budgeting for compute cycles. There may be concern over whether compute cycles will cost more than purchasing a server or two. There is no shortcut here: just like any other budgeting, a developer must do the math. Again, this may be new territory for developers, who typically aren’t asked how much storage an application requires; the storage cost is usually spread across the projects an organization conducts. Budgeting difficulties may be a good reason not to do a project. The upside is that, after doing the math, a developer will likely find that costs are far below buying the hardware.
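Here is that math in miniature, with entirely made-up prices; substitute your provider’s actual rates and your own hardware quote before drawing any conclusions.

```python
# Back-of-the-envelope budgeting with assumed prices (all figures invented).
server_purchase = 4000.0   # one physical server, up front
hours_per_month = 730
instance_rate = 0.10       # assumed $/hour for a comparable VM
storage_rate = 0.05        # assumed $/GB-month
storage_gb = 200

monthly_cloud = hours_per_month * instance_rate + storage_gb * storage_rate
print(f"Cloud: ${monthly_cloud:.2f}/month")
print(f"Months until cloud spend equals one server: "
      f"{server_purchase / monthly_cloud:.1f}")
```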
The cloud gives a developer control over all components from administration to assemblies. Added control comes with a price. A developer must venture into some new territory. This article provided a path to follow.
What is your opinion on cloud opportunities? Is it worth giving a trial? What is your personal experience in adopting a cloud option? Maybe you have some thoughts to share!
It seems most companies understand the opportunities that cloud computing solutions and services open up for them, especially SMBs. So now the question is: how do you choose a good provider, the right one for your company, and to what extent should cloud computing services be used? The complexities are numerous; issues such as security management, attack response and recovery, system availability and performance, the vendor’s financial stability, and its ability to comply with the law all need to be considered. Here are a number of tips in this regard (some taken from a CIO article):
1) Choose trusted providers. There are many cloud tech companies to choose from today, and new ones go live each month. Even so, for cloud services it’s better to stick with trusted, established companies: Microsoft, Google, Intuit, Dropbox, Apple, Amazon, Salesforce, to name a few. These are companies with deep pockets that deal with security daily, and your data is an important part of their business.
2) Distribute between free and paid accounts. For storing financial or similarly sensitive information, paid accounts are preferable. For less critical data and applications, free accounts from big, trusted cloud service providers may work well. For instance, Google can afford to offer decent free accounts because its business is well established; the free services act as bait, attracting new users and then gently nudging them toward paid services and premium accounts.
3) Select the right apps and data for the public cloud. Some businesses, mainly start-ups, begin using the public cloud for all applications, including mission-critical apps and their data. However, public clouds are neither for every organization nor for every application. Good candidates for the default security provided by most cloud service providers are websites, application development and testing, online product catalogs, and product documentation.
4) Evaluate and add security if it makes sense. CSPs can provide significantly different levels of public cloud security; the ISO/IEC 27000 series of standards provides guidelines for evaluating this. If necessary, security measures used in an organization’s internal private cloud can be extended to its public cloud instances, and some cloud products, such as CloudSpan, allow doing this.
5) Make use of third-party auditing services. When it comes to security compliance, organizations need not simply take the CSP’s word for it. Third-party auditing services can audit what is actually delivered and compare it with what was promised.
6) Add authentication layers. Most CSPs provide good authentication services for public cloud instances, and some products, such as Halo NetSec, can add a further layer of authentication. Before doing this, weigh the benefits of better public cloud security against the costs of increased network latency, possible performance degradation, and additional points of failure.
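As one generic illustration of an extra authentication layer (not how Halo NetSec itself works), here is a time-based one-time password (TOTP) second factor sketched with the pyotp library.

```python
# A TOTP second factor with pyotp (pip install pyotp); this illustrates
# the idea of an added authentication layer, not any specific product.
import pyotp

secret = pyotp.random_base32()      # provision once per user; store securely
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())  # what the user's authenticator app shows
assert totp.verify(totp.now())      # server-side check at login time
```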
7) Weigh additional security’s effect on integration. Adding on top of the CSP’s default security may affect overall application performance as well as identity and access management. This is especially important to consider if you work with mission-critical applications that need to integrate with other business applications.
8) Make the SLA’s security guarantees clear to yourself. Public cloud security guarantees should be clearly stipulated as service level agreements in the contract. Make sure that transparent monitoring and reporting functions are available to you as a customer, and that security processes, procedures, and practices are transparent and verifiable, so that you can rely on this information.
9) Scrutinize logging and monitoring. Comparing one CSP’s logging and monitoring practices with another’s before you sign an SLA may reveal subtle differences in the security provided, so this is another key to ensuring public cloud security.
10) Add encryption. You may want to employ your own encryption instead of, or in addition to, the encryption provided by the CSP. A number of installable products and SaaS vendors can do this type of encryption on the fly (VPN-enabled cloud instances fall under this category of augmented public cloud security). When you do, only the customer and the third party know the key; the CSP does not.
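A minimal sketch of client-side encryption before upload, using the Fernet recipe from the Python cryptography package; the point is simply that the key is generated and held on your side, so the CSP only ever stores ciphertext.

```python
# Client-side encryption sketch (pip install cryptography). The key never
# leaves your side, so the cloud provider sees only ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this out of the cloud provider's hands
f = Fernet(key)

ciphertext = f.encrypt(b"customer records")   # upload this, not the plaintext
assert f.decrypt(ciphertext) == b"customer records"
```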
11) Spread outage risk across multiple, even redundant, CSPs. Because cloud provisioning tools these days come pre-integrated with leading CSPs, it’s possible to spin up additional server instances with multiple CSPs automatically on demand: instances are turned on if average CPU utilization reaches a certain threshold and turned off once utilization drops. When spinning up additional instances, it may also make sense to use different CSPs in round-robin fashion, as sketched below.
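The hooks here (get_average_cpu, spin_up, spin_down) are hypothetical stand-ins for whatever your provisioning tool actually exposes; only the threshold-and-rotation logic is the point.

```python
# Threshold-based scaling in miniature, rotating providers round-robin.
import itertools

csps = itertools.cycle(['csp_a', 'csp_b'])  # alternate between providers
HIGH, LOW = 0.80, 0.30                      # CPU utilization thresholds

def autoscale_once(get_average_cpu, spin_up, spin_down):
    cpu = get_average_cpu()
    if cpu > HIGH:
        spin_up(next(csps))   # add an instance on the next provider in line
    elif cpu < LOW:
        spin_down()           # shed an instance once load drops

# Demo with fake hooks:
autoscale_once(lambda: 0.9, lambda csp: print('spin up on', csp), lambda: None)
```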
As you can see, the experience of using cloud services can be tuned and improved by following some of this advice. What’s crucial is finding a balance between cloud security and performance: there’s always a tradeoff, since adding layers of security may come at the expense of slower applications and additional points of failure. Figuring out the right balance between security and performance, though difficult, is a must for running a strong business today.
Helen Boyarchuk – Business Development Manager
Helen.Boyarchuk@altabel.com | Skype ID: helen_boyarchuk
Altabel Group – Professional Software Development
There is no doubt that 2012 will be another big year for BI and information management. In this article we’ve tried to gather what we believe are the top BI trends for the near future.
Big Data → Need for Speed
The rise in volume (amount of data), velocity (speed of data), and variety (range of data) gives way to new architectures that no longer merely collect and store data but actually use it: on-demand and real-time BI architectures will replace traditional data warehouses. Successful business intelligence projects will need to consider Big Data as part of their data landscape for the value it delivers, and more and more organizations will look toward statistics and data mining to set strategic direction and gain greater insight. At the same time, BI users expect faster answers from their BI environment, disregarding the fact that the size of the data keeps increasing.
Shift from analytical BI to operational BI
Increased adoption of cloud and mobile BI encourages individuals to access their KPI (key performance indicator) dashboards more often. An operational dashboard works much like a car’s dashboard. As you drive, you monitor metrics that indicate the current performance of your vehicle and make adjustments accordingly: when the speed limit changes, you check your speedometer and slow down; when you are out of gas, you pull over and fill up. Likewise, an operational dashboard allows you to make tactical decisions based on current performance, whether it is chasing a red-hot lead or ordering an out-of-stock product.
Recent surveys showed that only 25% of employees in businesses that adopted BI had access to those tools, and not because the other 75% didn’t want or need the information, but because traditional BI tools have been too bulky and technical for them to use.
As organizations increasingly adopt cloud and mobile BI dashboards, this situation is likely to change. Business intelligence is heading towards simpler, more straightforward methods and tools.
Agile BI
An Agile approach can be used to incrementally remove operational costs and, if deployed correctly, can return great benefits to any organization. Agile provides a streamlined framework for building business intelligence/data warehousing (BIDW) applications that regularly delivers faster results using just a quarter of the developer hours of a traditional waterfall approach.
It allows you to start a project after doing the 20 per cent of the requirements and design that deliver 80 per cent of the project’s value. The remaining details are filled in once development is underway and everyone has had a good look at what the challenges actually are.
BI going mobile
In a survey conducted by Gartner, it was predicted that by 2013 one-third of all BI usage would be on a mobile device, such as a smartphone or tablet. BI users want to access their data anytime and anywhere. This puts demands not only on the backend of any BI solution (such as data warehouse appliances) but also on the frontend, where information access and visualization must be possible.
BI going up to the Cloud
As cloud computing continues to dominate the IT landscape, BI is moving into the cloud as well. Over the next few years, adoption of cloud BI tools will be driven by a number of important factors. First, cloud-based solutions offer the advantage of being relatively simple and convenient to deploy. Second, cloud tools are more easily scalable to provide access to key performance indicators (KPIs) for everyone in your organization, no matter where they are or what device they are using. Lastly, continually improving security measures will put to rest any reservations businesses have about storing their sensitive data in the cloud.
We believe the areas enumerated above will grow over the next few years. Organizations will embrace the Agile approach, utilizing new tools and technologies to decrease delivery times and demonstrate substantial business value. As we put more data into the cloud, big data will become standard. Data itself will be delivered to satisfy the desires of users, so access from mobile devices will come to dominate desk-based consumption. The businesses that embrace these new business intelligence trends, and take steps to change and adapt the way data is hosted, analyzed, utilized, and delivered, will be the ones that grow and prosper in the near future.
And what are your predictions for the big business intelligence trends in the next few years? Do you agree/disagree with our predictions?