Altabel Group's Blog

Posts Tagged ‘Cloud’

The infrastructure-as-a-service (IaaS) market has exploded in recent years. Google stepped into the fold of IaaS providers, somewhat under the radar. The Google Cloud Platform is a group of cloud computing tools for developers to build and host web applications.

It started with services such as the Google App Engine and quickly evolved to include many other tools and services. While the Google Cloud Platform was initially met with criticism of its lack of support for some key programming languages, it has added new features and support that make it a contender in the space.

Here’s what you need to know about the Google Cloud Platform.

1. Pricing

Google recently shifted its pricing model to include sustained-use discounts and per-minute billing. Billing starts with a 10-minute minimum and accrues per minute thereafter. Sustained-use discounts begin after a particular instance has run for more than 25% of a month; users receive a discount for each incremental minute used beyond that mark.
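To make the billing mechanics concrete, here is a minimal Python sketch of per-minute billing with a 10-minute minimum and a sustained-use-style discount. The per-minute rate and discount factor are made-up illustrative numbers, not Google's actual price list.

```python
# Illustrative sketch of per-minute billing with a 10-minute minimum and a
# sustained-use-style discount. Rates and the discount factor are made up.

MINUTES_IN_MONTH = 30 * 24 * 60


def billed_minutes(runtime_minutes):
    """Billing starts at a 10-minute minimum, then accrues per minute."""
    return max(10, runtime_minutes)


def monthly_cost(minutes_used, rate_per_minute=0.001, discount=0.7):
    """Charge full rate up to 25% of the month, a discounted rate beyond it."""
    threshold = 0.25 * MINUTES_IN_MONTH
    full_price = min(minutes_used, threshold) * rate_per_minute
    discounted = max(0, minutes_used - threshold) * rate_per_minute * discount
    return full_price + discounted


print(billed_minutes(4))              # 10 -- the minimum applies
print(round(monthly_cost(20000), 2))  # minutes past the 25% mark are discounted
```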

2. Cloud Debugger

The Cloud Debugger gives developers the option to assess and debug code in production. Developers can set a watchpoint on a line of code, and any time a server request hits that line, they will get all of the variables and parameters of that code. According to a Google blog post, there is no overhead to run it and “when a watchpoint is hit very little noticeable performance impact is seen by your users.”

3. Cloud Trace

Cloud Trace lets you quickly figure out what is causing a performance bottleneck and fix it. Its core value is showing how much time your product spends processing certain requests. Users can also get a report that compares performance across releases.

4. Cloud Save

The Cloud Save API was announced at the 2014 Google I/O developers conference by Greg DeMichillie, the director of product management on the Google Cloud Platform. Cloud Save is a feature that lets you “save and retrieve per user information.” It also allows cloud-stored data to be synchronized across devices.

5. Hosting

The Cloud Platform offers two hosting options: App Engine, its Platform-as-a-Service, and Compute Engine, its Infrastructure-as-a-Service. In the standard App Engine hosting environment, Google manages all of the components outside of your application code.

The Cloud Platform also offers managed VM environments that blend the auto-management of App Engine with the flexibility of Compute Engine VMs. The managed VM environment also gives users the ability to add third-party frameworks and libraries to their applications.

6. Andromeda

Google Cloud Platform networking tools and services are all based on Andromeda, Google’s network virtualization stack. Having access to the full stack allows Google to create end-to-end solutions without compromising functionality based on available insertion points or existing software.

According to a Google blog post, “Andromeda is a Software Defined Networking (SDN)-based substrate for our network virtualization efforts. It is the orchestration point for provisioning, configuring, and managing virtual networks and in-network packet processing.”

7. Containers

Containers are especially useful in a PaaS situation because they speed up deployment and app scaling. For container management and virtualization on the Cloud Platform, Google offers Kubernetes, its open source container scheduler. Think of it as a Container-as-a-Service solution providing management for Docker containers.

8. Big Data

The Google Cloud Platform offers a full big data solution, but two tools for big data processing and analysis stand out. First, BigQuery allows users to run SQL-like queries on terabytes of data. Plus, you can load your data in bulk directly from Google Cloud Storage.
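As an illustration, here is a minimal query against one of BigQuery's public sample tables using the google-cloud-bigquery Python client library (a later client than existed when this post was written); the project id is a placeholder.

```python
# Minimal BigQuery sketch using the google-cloud-bigquery client library.
# The project id is a placeholder; the sample table is a real public dataset.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project id

query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query):  # submits the job and iterates over results
    print(row.word, row.total)
```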

The second tool is Google Cloud Dataflow. Also announced at I/O, Google Cloud Dataflow allows you to create, monitor, and glean insights from a data processing pipeline. It evolved from Google’s MapReduce.

9. Maintenance

Google does routine testing and regularly sends patches, but it also sets all virtual machines to live-migrate away from maintenance while it is being performed.

“Compute Engine automatically migrates your running instance. The migration process will impact guest performance to some degree but your instance remains online throughout the migration process. The exact guest performance impact and duration depend on many factors, but it is expected most applications and workloads will not notice,” the Google developer website said.

VMs can also be set to shut down cleanly and restart away from the maintenance event.

10. Load balancing

In June, Google announced Cloud Platform HTTP Load Balancing, which balances traffic from multiple compute instances across different geographic regions.

“It uses network proximity and backend capacity information to optimize the path between your users and your instances, and improves latency by connecting users to the closest Cloud Platform location. If your instances in one region are under heavy load or become unreachable, HTTP load balancing intelligently directs new requests to your available instances in a nearby region,” a Google blog post said.

Taken from TechRepublic

 

Lina Deveikyte
Lina.Deveikyte@altabel.com 
Skype ID: lina_deveikyte
Marketing Manager (LI page)
Altabel Group – Professional Software Development

The practice of renting virtualised pools of servers and storage over the net is known as infrastructure as a service (IaaS), and is the most popular class of cloud service available today.

But most businesses are only making limited use of IaaS, with the majority restricting their use to spinning up application development and test environments or to rapidly provisioning extra server capacity during periods of heavy demand.

The reasons for this limited adoption are many: concerns about security of data and systems controlled by a third party, worries over the reliability of systems run by a cloud provider and served over the internet, and the premium paid for getting a vendor to provide infrastructure over running it in-house.

But where demand for IT services is uneven, fluctuating between high and low demand, or where a business needs infrastructure to test applications for a short period or to try out a new endeavour, it can be more cost-effective and far quicker to rent infrastructure from a cloud provider than to attempt to build it in-house. There are even companies like Netflix that run their entire IT operation on Amazon Web Services’ infrastructure.

AWS’ main IaaS offerings are EC2, which provides compute on demand, and S3, which provides storage on demand. EC2 gives companies access to virtual machines, or instances, running an OS and applications of their choice over the internet, with these instances configurable and controllable via web service APIs. Alongside and on top of EC2 and S3, AWS provides a range of cloud offerings related to networking, load balancing, databases, data warehousing and big data analysis, as well as a range of management tools.
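For a flavour of what “controllable via web service APIs” looks like in practice, here is a sketch using boto3, the current AWS SDK for Python (which postdates this post); the AMI id and region are placeholders.

```python
# Launching and terminating an EC2 instance through the web service API,
# sketched with boto3 (the AWS SDK for Python). The AMI id is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

resp = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder AMI id
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print("started", instance_id)

# Tear the instance down again when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```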

At the AWS Enterprise Summit in London on Tuesday businesses broadly outlined the ways they are using AWS today and lessons that can be learned from their experience.

Application development and testing
Developers and testers commonly use a self-service approach to draw computing resources from the likes of AWS EC2, S3 and Amazon’s block-level storage service EBS. Typically this is carried out via a self-service portal, such as AWS’ own CloudFormation, or via some other form of API call.

Businesses often create self-service enterprise portals that automatically restrict how much computing resource should be provisioned and for how long based on governance and workflow requirements, and that tag the resources that are appropriate for different teams.

Businesses are using EC2 to enable standardized developer/test workstation builds, to add integrated project management and issue tracking, to run popular source control systems and to drive build servers and continuous integration, according to Yuri Misnik, head of solutions architecture at AWS.

On the testing side, EC2 instances are being used to scale up unit and regression tests and run them in parallel in a fraction of the time of doing it in-house, to run A/B scenario testing on replica stacks and to create sandboxes for security testing, said Misnik. For testing how applications perform under load, he said, customers sometimes use spot instances – a pricing model where customers bid for time on unused EC2 instances – as a cost-effective way of stressing applications.

AWS has a number of pricing models for renting infrastructure, based on how customers want to use it. For instance, on-demand instances let customers pay for compute capacity by the hour with no long-term commitments, while reserved instances require a one-time upfront payment in return for a significant discount on the hourly rate. Customers can save a lot of money by ensuring the pricing model they use is best suited to their needs, said Misnik, citing a customer that cut costs by 45 percent by transitioning to reserved instances. AWS also provides a tool, Trusted Advisor, which makes recommendations on how customers can save money and improve performance or security.
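The math behind such savings is simple; here is a back-of-the-envelope Python comparison with purely illustrative rates (not AWS list prices) that happens to land near the 45 percent figure cited above.

```python
# Back-of-the-envelope comparison of on-demand vs reserved pricing.
# All rates and the upfront fee are illustrative, not AWS list prices.
HOURS_PER_YEAR = 24 * 365

on_demand_rate = 0.10      # $/hour, illustrative
reserved_upfront = 200.00  # one-time payment, illustrative
reserved_rate = 0.032      # discounted $/hour, illustrative

on_demand_cost = on_demand_rate * HOURS_PER_YEAR
reserved_cost = reserved_upfront + reserved_rate * HOURS_PER_YEAR

saving = 1 - reserved_cost / on_demand_cost
print(f"on-demand ${on_demand_cost:.0f} vs reserved ${reserved_cost:.0f} "
      f"-> {saving:.0%} saved")   # ~45% with these made-up numbers
```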

The entertainment conglomerate Lionsgate has used EC2 to develop and test SAP apps, reducing deployment time from weeks to days, as has Galata Chemicals, which reduced the cost of running development and testing operations by 70 percent by moving to EC2.

Building and running new applications
The UK broadcaster Channel 4 launched its first live application on AWS in 2008 and today runs all of its new web apps on the infrastructure.

Describing the benefits of running apps on AWS, Bob Harris, chief technology officer with Channel 4, name-checked agility, scalability and resilience.

“We get servers up and running for teams in minutes, if it’s urgent, or hours,” he said.

The broadcaster sees “a huge increase in productivity” among teams building apps running on AWS, said Harris, because of the development team’s ability to deploy or destroy virtual servers and apps as and when they need to.

That freedom to spin up new instances has a downside, however.

“One of the things it lets you do is be inefficient far more efficiently,” said Harris.

“People tend to start instances, maybe they start more than they need or too big. So we’ve had a constant battle over the past couple of years making sure that we’re keeping our house in order behind us.”

Tools like Trusted Advisor are designed to help keep on top of this problem by flagging up the number of instances being used.

For a broadcaster that has to deal with spikes in traffic to its web sites and apps after popular TV programmes are broadcast, and doesn’t want to have to buy excess capacity for one time peaks in demand, the scalability of AWS was a good fit, said Harris.

“The peaky workloads are the important ones. In the past you had to explain to the marketing manager those 404s were a sign of success because it showed how much traffic came to your website. Today I can’t remember the last time that happened on a Channel 4 website.”

Harris estimates that the total cost of ownership of running these services on AWS is more than five times lower than running them on in-house infrastructure.

He stressed the need to build services that work well with horizontal scaling across EC2 instances as demand increases. Licensing of back-end software is another consideration: Harris said there are still difficulties with software vendors tied to a per-machine or per-CPU-socket licensing model, which is obviously a poor fit for EC2, where software can be running on a varying number of virtual and physical machines based on demand.

“My personal view is that a significant number of proprietary models are simply not cloud-friendly because they don’t allow us to take advantage of that flexibility. Cloud plus open source is really the place you need to be if you want high scalability,” he said.

Proprietary software vendors are beginning to make concessions for running their software in the cloud, with Microsoft, SAP, Oracle and IBM offering licence mobility deals for their major software packages to AWS customers that are a better fit for cloud computing’s scalable pay per use model.

Augment on-premise systems and run hybrid apps
Hybrid apps are those that rely on a mixture of back-end services running on both in-house and AWS infrastructure.

AWS provides multiple features to help companies building hybrid apps integrate their datacentres with AWS infrastructure in a secure fashion, such as AWS Direct Connect and Virtual Private Cloud.

Access controls similar to those within in-house datacentres can be set on AWS infrastructure using its Identity and Access Management tools. AWS CloudHSM (hardware security module) appliances offer ultra-secure key management for customers who must follow stringent data protection regulations before they can move data onto AWS infrastructure.

Another approach taken by businesses is to offload certain systems to AWS: Nokia runs analysis on data stored on Amazon’s Redshift data warehousing platform, allowing it to reduce cost by up to 50 percent and run queries up to twice as fast compared to its previous data warehouse.

Channel 4 is using AWS Elastic Map Reduce (EMR) service, the AWS-hosted Hadoop framework running on EC2 and S3, to analyse web logs from its sites going back a number of years and hone ad-targeting and programme recommendations.

Channel 4’s Harris said that EMR provides a way for the broadcaster to experiment without the commitment of an up front investment, an important consideration when the outcome of big data analysis is uncertain.

“It’s about the cost of exit – how much money have I sunk if I have to walk away. In big data we’re all trying to work out ‘What’s the real value of a better ad target?’, it’s a hard analysis to do. Imagine if I wanted to ramp my physical platform by ten or more times, we’re talking tens of millions of pounds. By the time I’ve also hired the half a dozen people to run this thing this is seriously expensive.”

AWS’ Misnik also said some businesses are using AWS infrastructure as a cloud-based disaster recovery site, running anything up to hot standby environments with rapid failover.

Migrating existing apps to the cloud
Migrating apps running on in-house infrastructure to AWS is less common, as it presents a number of challenges.

Matthew Graham-Hyde, CIO of the media conglomerate Kantar Group, said getting a migrated app to work requires both re-engineering the app and working out the right mix of cloud infrastructure it needs to sit upon.

“It’s a very different working model when you take an application and re-engineer it for the cloud,” he said, for instance so it is able to scale across available instances based on demand and exploit the distributed nature of the cloud architecture to become more resilient to failure.

“You have to have everyone in the room – your infrastructure architects, sysadmins, business analysts, developers, consulting partners – and you’re ripping up installation after installation as you re-engineer this application to get the true benefits of being a cloud application. It’s a very iterative model.”

Kantar has migrated a number of apps to AWS, including a third party data visualisation tool whose running costs have dropped by 50 percent since the move.

AWS recommends migrating apps that are under-utilised, that have an immediate need to scale or that are simply the easiest to move. Examples of apps and systems that AWS claims should prove more straightforward to migrate include web apps, batch processing systems, content management systems, digital asset management systems, log processing systems, collaborative tools and big data analytics platforms.

Another company that claims to have benefited from shifting existing systems to AWS is the pharmaceutical firm Bristol-Myers Squibb, which migrated its clinical trial simulation platform, reducing simulation times from 60 hours to 1.3 hours and costs by 60 percent.

Everything in the cloud
Video streaming company Netflix is one of the few firms to have dispensed with its in-house datacentres entirely in favour of running its entire infrastructure on top of AWS services.

The spiky nature of customer traffic means Netflix is a good match for the scalability offered by EC2. Netflix uses thousands of EC2 instances in multiple regions and across the various AWS availability zones to support more than 33 million customers worldwide.

Not having to run IT infrastructure has freed up the IT team at Netflix to devote time to improving the performance and features of the company’s IT services. But Netflix is also an example of the amount of work needed to go “all-in” on cloud: the company has devoted a lot of time to making AWS work as a platform for its business (going as far as developing the Chaos Monkey software, which breaks parts of production systems to test overall resiliency), to handling the latency inside a distributed architecture, and to working within the limitations on compute, storage and networking that come with sharing a server’s resources with other customers.

Kristina Kozlova
Kristina.Kozlova@altabel.com
Skype ID: kristinakozlova
Marketing Manager (LI page)
Altabel Group – Professional Software Development

Debates about which programming language is best are always hard and heated. Likewise, there is no ideal language that works for all web application project requirements. Wikipedia is written in PHP. Gmail is written in Java. Python is the number one choice of Google and YouTube. Ruby was used to create Twitter and Hulu. Slashdot is written in Perl. Stack Overflow is written in C#.

Browsing for the best web programming languages among the dynamic ones, you’ll mostly see PHP, Python and Ruby listed. Several years ago PHP was considered the best tool for the web job, but since then both Python and Ruby have matured and grown robust libraries and frameworks around them that now make them better candidates for many web projects.

Today many consider PHP to be fine for average everyday web systems, while Python and Ruby are thought to be more suitable for most web applications in general and for more advanced work in particular. Just like PHP, they are free, open source, run on an open source stack (Apache or Nginx on Linux, Windows or BSD), and play well with any database engine. However, Ruby and Python have cleaner syntax, and both enforce good programming habits by their nature, especially Python, whereas PHP by its nature tends to encourage sloppy spaghetti code. PHP’s object-oriented features also suffer from its arcane syntax.

Let’s get deeper insights into these two web programming languages from various standpoints:

As mentioned before, Python and Ruby are two of the most popular dynamic programming languages used in high-level application development. In fact, Ruby was built using some of the design elements from Python. Developers often prototype using these two languages before implementing in compiled languages, because of their modularity and object-oriented features. Many use Python or Ruby instead of Perl as a simple scripting language. Python and Ruby are popular among web developers as well because of their rapid development cycle, with Python boasting computation efficiency and Ruby boasting code design.

a/ Philosophy
Python really believes that code readability is the most important thing. Hence there is one true way of writing code, or as it has been reformulated lately: “There’s a most understandable way to do something and that is how it should be done”. Python is designed to strongly emphasize programmer productivity, and it likes things to be structured, consistent, and simple. Python syntax enforces strict indentation rules; in fact, indentation has semantic meaning in Python.
Ruby believes in giving the programmer flexibility, freedom and power. It was designed, first and foremost, to make programming fun for its creator, with guiding concepts as follows: “The Principle of Least Surprise” and “There’s more than one way to do the same thing”. The latter principle, inherited from Perl, is the reason why many Ruby methods have alternate names, which may cause some API confusion among new practitioners. However, this flexibility enables Ruby to be used as a metalanguage for describing DSLs, and it gives Ruby a good way to write concise, compact code, favouring expressiveness and cleverness.
Python people like libraries to be transparent and obvious in how they work, and hence easier to learn, while Ruby people tend to provide clean and pretty interfaces with “magic” behind the scenes. This makes development very fast when you understand the magic, but frustrating when you don’t.
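A small Python-side illustration of the indentation point above: the whitespace is not decoration, it is the block structure itself.

```python
# In Python, indentation has semantic meaning: the indented lines ARE the block.
def classify(numbers):
    evens, odds = [], []
    for n in numbers:
        if n % 2 == 0:        # this branch exists purely by its indentation
            evens.append(n)
        else:
            odds.append(n)
    return evens, odds

print(classify(range(6)))     # ([0, 2, 4], [1, 3, 5])
```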

b/ Ease of Use
Python is known for its ease of use. It allows beginners to start building powerful sites quickly, and it has the power to grow in complexity while keeping its ease of comprehension. For example, one of the hardest parts of coding is going back to something you coded long ago and trying to remember its logic. Because Python uses natural language with whitespace and indenting, it is much clearer and easier to read than languages like Ruby. That makes it easier to fix mistakes or make updates. Also, there are literally thousands of pre-built modules that can be snapped on to let you get up and running on the web immediately. Its intuitive introduction to object-oriented coding concepts, such as classes, modules, and libraries, allows you to move on to other related programming languages as they develop.

c/ Object Oriented Programming
Both Python and Ruby support object-oriented programming. Still, Ruby’s object orientation is considered more ‘pure’ in that all functions exist inside a class of some sort. Python’s object orientation is more akin to that of C++, which allows functions and statements to exist outside of classes. In Ruby, even global variables are actually embedded within the ObjectSpace object. Python doesn’t have global variables in the same sense, instead using attributes of module objects. In both Python and Ruby, an instance of literally any type is an object. However, where in Ruby all functions and most operators are in fact methods of an object, in Python functions are first-class objects themselves.
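A quick sketch of that last point, that in Python every value is an object and functions are first-class values:

```python
# Functions are first-class objects: they can be assigned, passed and inspected.
def shout(text):
    return text.upper()

handler = shout                   # a function stored in a variable
print(handler("hello"))           # HELLO

def apply_twice(func, value):
    return func(func(value))      # a function passed as an argument

print(apply_twice(shout, "hi"))   # HI
print(isinstance(42, object))     # True -- every value is an object
print((42).bit_length())          # 6  -- even int literals have methods
```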

d/ Syntax
Ruby includes several syntactic features which make dynamic extension of, and higher-order interaction with, external (library) code more straightforward: in particular, blocks and mix-ins. Most things implementable with block and mix-in syntax are also achievable in Python; they are simply less syntactically natural and clear, and so less commonly form the centerpiece of major libraries or common styles of programming. These features, combined with a lighter-weight syntax with fewer restrictions (whitespace flexibility, optional parentheses, etc.), make Ruby more suitable for pervasive and relatively transparent use of metaprogramming.
At the same time, while this flexibility and the Ruby community’s tendency to use it for metaprogramming can facilitate aesthetically pleasing code, they can also create stylistic variation in how the language is used, and obscure the mechanisms by which code actually works. Python’s more restrictive syntax is intentionally designed to steer developers towards one canonical “pythonic” style to improve accessibility and comprehension.

e/ Style
Ruby code is organized into blocks, with blocks starting with various constructs and ending with the keyword “end”. Python code is indentation-sensitive, with successively larger indentation meaning tighter (nested) scopes. Python’s syntax has been described as executable pseudocode.

f/ Functional Programming
Both languages support some functional programming constructs, but Ruby is arguably better suited to a functional programming style. Lambdas in Python are generally very short, because they are restricted to expressions and cannot contain statements. Ruby’s lambda support is far more flexible, allowing lambda expressions of arbitrary length.
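To see the restriction in practice, here is a minimal Python sketch: a lambda holds exactly one expression, and anything that needs statements requires a named function.

```python
# Python lambdas are limited to a single expression.
double = lambda x: x * 2          # fine: one expression
print(double(21))                 # 42

# A body with statements (if/return) cannot be a lambda -- use def instead:
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

print(clamp(120, 0, 100))         # 100
```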

g/ Speed
The standard CPython implementation is generally regarded as executing code slightly faster than Ruby. If speed is really an issue for a Python project, you also have the option of using Cython, Pyrex, PyPy (JIT) or the Shed Skin tools to compile your code into C or C++.
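If you want to check such claims against your own workload, the standard library’s timeit module gives a quick measurement; a minimal sketch:

```python
# Quick-and-dirty micro-benchmark with the standard library's timeit module.
import timeit

elapsed = timeit.timeit("sum(i * i for i in range(1000))", number=1000)
print(f"1000 runs took {elapsed:.3f}s")  # absolute numbers vary by machine
```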

h/ Features
Both Python and Ruby are high level application development languages. Each of them is estimated to have a Capers Jones language level of at least 15. Both languages promote test driven development.
Both languages have full Unicode support, although the way that support is implemented varies. Python distinguishes between “Unicode strings” and “byte-strings”. Ruby, on the other hand, treats all strings as byte-strings with a semi-hidden flag, which causes problems when dealing with badly encoded data from third-party sources.
Both Python and Ruby support multithreading. Python has the Global Interpreter Lock (GIL), which negates much of the potential advantage of using threads; Ruby has a comparable Global VM Lock (GVL).
There are a number of functions that are available by default in Ruby but for which Python requires an import from the standard library. Python also supports generators and generator expressions.
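For readers new to that last point, a tiny sketch of generators and generator expressions in Python:

```python
# Generators produce values lazily, one at a time, instead of building a list.
def squares(n):
    for i in range(n):
        yield i * i               # execution suspends here between values

print(list(squares(5)))           # [0, 1, 4, 9, 16]

# The same idea inline, as a generator expression:
print(sum(x * x for x in range(5)))   # 30
```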

i/ Community
There are great communities behind both languages. Some people believe that Python has a more developed community in terms of libraries for data analysis, machine learning, natural language processing and scientific computing. As for the community folks, Python people are believed to be conservative and wary of change, while Ruby people welcome change and love new shiny stuff even if it breaks older things. Consequently, the Python world is more stable and you can update your installation without much trouble, but that also means new technology is added only slowly.

j/ Frameworks
There are a number of web frameworks based on both Ruby and Python. The most notable and leading are Ruby on Rails (Ruby) and Django (Python), both based on MVC. Django is more declarative: with it you’ll have a clearer understanding of what’s actually going on, since it lets you specify most configuration details yourself, and it creates a much simpler project structure. The centerpiece of Rails’s philosophy, on the other hand, is convention over configuration: Rails provides you with more defaults.

k/ Popularity
Python is generally more widely used than Ruby according to most measures, but Ruby’s popularity has also grown rapidly in the wake of the rising popularity of the Ruby on Rails web application development framework.
Python has a more mature, general-purpose nature versus Ruby’s more niche (Rails-centred) usage. Python is stronger in automating system administration and software development, web application development, data manipulation, analyzing scientific data (with the help of the numpy, scipy and matplotlib modules), biostatistics, and teaching introductory computer science and programming. Ruby plus Rails holds a slight edge over Python plus Django for web development, sees use in general programming, and has more mindshare there.
In terms of cloud deployment, Python can run on Google Cloud (Google App Engine), though Ruby has very strong cloud deployment options in the shape of Heroku and Engine Yard.

Would you prefer Python or Ruby over PHP for implementation of your web project? And is it indeed philosophy that guides your choice between Python and Ruby? Interested to hear your thoughts.

Helen Boyarchuk
Helen.Boyarchuk@altabel.com
Skype ID: helen_boyarchuk
Business Development Manager (LI page)
Altabel Group – Professional Software Development


Developers are in a unique position to educate themselves about, and capitalize on, cloud opportunities. Unlike learning new programming techniques or frameworks, cloud learning moves beyond development: there are infrastructure aspects to consider, as well as potential organizational process and policy changes. However, developers know the application, and cloud administration is a much lower bar than, for example, network administration. If you’re looking for a strategy to follow to cloud enlightenment, you’re reading the right article.

Give the Cloud a Whirl
When it comes to the cloud, don’t wait for the storm to hit you; educate yourself. There is no substitute for experimentation and hands-on experience. Start by separating reality from marketing. Almost every cloud vendor offers a free trial; Microsoft Azure, for example, does. If you are truly new to cloud development, imagine borrowing a company server for three months, only with no setup time. Just turn it on and away you go.

Given that experimentation time is limited, go for breadth rather than depth. Get a taste of everything. What most developers find is that, after some initial orientation and learning, the experience becomes what they already know. For example, Azure has an ASP.NET-based hosting model called Web Roles. After configuring and learning Web Role instrumentation, the development experience is ASP.NET. Learning Azure Web Roles amounts to learning some new administration and configuration skills, coupled with a handful of new classes. The rest of what you need to know is nothing new if you’ve done ASP.NET!

Developers must keep their time constrained. Struggling for hours with something new is often not worth the effort, and one should question wide adoption of something that is difficult to work with. Cloud offerings are typically not niche or differentiating skills like, for example, SQL Server tuning.

Whatever cloud option a developer starts with, understand the authentication options. Intranet developers typically take authentication for granted; ASP.NET makes it look easy. Consider all the moving parts involved in making authentication automatic and secure. Understanding authentication is especially important if parts of an application will live both within the organization’s datacenter and within the cloud provider.

Finally, look for the right opportunities to apply these new skills.

Navigating the Fog
Most developers are adept at picking when to jump on new technology and when to pull back. Unlike adopting, for example, a new web services approach, adopting a cloud option entails learning a little more administration. The cloud can give a developer total control, but the cost is learning a bit more administration.

Developers may find themselves in new territory here. Typically a “hardware person” selects a machine and a “network person” selects and configures a firewall. Cloud portals make network and server configuration easier, but the portal doesn’t eliminate the configuration role. The public cloud handles the hardware, but the developer must choose, for example, how many CPUs, servers, and load balancers will be needed. This lowers the administration bar, but it also places the burden on the developer.

The cloud will not be the right option for every project, but give it a fair chance. Decision makers tend to have one of two reactions to cloud: outright rejection or wild-eyed embrace. Neither is healthy; there is middle ground. Don’t let unrealistic expectations set by marketing brochures guide the first project. The experiences described earlier in the article will be helpful here. Set the bar low, and make the first experience a good experience.

Supplementing with the Cloud
One potential approach is to supplement with the cloud, letting it handle some part of the application. For example, requirements may dictate a web page to handle user registration. Registrations often have deadlines and, given human nature, people often procrastinate, so registration traffic is likely to spike in the days before the deadline. Rather than purchasing servers to accommodate the spike and leaving them idle for most of the year, do registration in the cloud: dial up more servers the week before registrations are due and dial the server count back down the week after.
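As a sketch of the “dial up, dial down” idea (purely illustrative: a real deployment would drive the provider’s autoscaling APIs, and the deadline date here is hypothetical):

```python
# Illustrative scaling schedule for the registration example. A real system
# would call the cloud provider's APIs; this only shows the policy.
import datetime

DEADLINE = datetime.date(2014, 9, 1)   # hypothetical registration deadline

def desired_servers(today):
    days_left = (DEADLINE - today).days
    if 0 <= days_left <= 7:
        return 10   # dial up for the week before the deadline
    return 2        # baseline capacity the rest of the year

print(desired_servers(datetime.date(2014, 8, 28)))  # 10
print(desired_servers(datetime.date(2014, 3, 1)))   # 2
```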

Aside from technical change, cloud adoption may require organizational change.

Clouds Don’t Work in a Vacuum
I would bet good money that most developers reading this article have no idea which ports in their organization are closed to incoming TCP/IP connections. However, knowing who to ask matters more than what you already know. In some sense every organization is its own private cloud, and networking professionals have been connecting things together longer than developers have. Internet performance is considerably different from intranet performance. Cultivate relationships with whoever operates your firewall.

Passing through a firewall is overhead, and your organization’s infrastructure may not be cloud-ready (though if your network people banter about DMZs, chances are it is). As stated earlier, authentication is important to cover; forcing users to authenticate multiple times within an application is intolerable to most users.
Budgeting for servers may be different from budgeting for compute cycles. There may be concern over whether compute cycles will cost more than purchasing a server or two. There is no shortcut here: just like any other budgeting, a developer must do the math. Again, this may be new territory for developers, who typically aren’t asked how much storage an application requires; storage cost is usually spread throughout the projects an organization conducts. Budgeting difficulties may be a good reason not to do a project. The upside is that, after doing the math, a developer will likely find the costs are far below buying the hardware.

Conclusion
The cloud gives a developer control over all components, from administration to assemblies. Added control comes with a price: a developer must venture into some new territory. This article provided a path to follow.

What is your opinion on cloud opportunities? Is it worth giving a trial? What is your personal experience in adopting a cloud option? Maybe you have some thoughts to share!

Polina Mikhan

Polina Mikhan
Polina.Mikhan@altabel.com
Skype ID: poly1020
Business Development Manager (LI page)
Altabel Group – Professional Software Development

WHAT

In today’s business and technology world you can’t have a conversation without touching upon the issue of big data. Some would say big data is a buzzword and the topic is not new at all. Still, from my point of view, the reality around data has changed considerably over the last two to three years, which is why big data is discussed so hotly. And the figures prove it.

IBM reports we create 2.5 quintillion bytes of data every day. In 2011 our global output of data was estimated at 1.8 billion terabytes. What impresses is that, according to Big Blue, 90 percent of the data in the world today was created in the past two years. In the information century, those who own the data, can analyze it properly, and then use it for decision-making purposes will definitely rule the world. But if you don’t have the tools to manage and perform analytics on that never-ending flood of data, it’s essentially garbage.

Big data is not really a new technology, but a term used for a handful of technologies: analytics, in-memory databases, NoSQL databases, Hadoop. They are sometimes used together, sometimes not. While some of these technologies have been around for a decade or more, a lot of pieces are coming together to make big data the hot thing.

Big data is so hot and is changing things for the following reasons:
– It can handle massive amounts of all sorts of information, from structured, machine-friendly information in rows and columns to more human-friendly, unstructured data from sensors, transaction records, images, audio and video, social media posts, logs, wikis, e-mails and documents,
– It works fast, almost instantly,
– It is affordable because it uses ordinary low-cost hardware.

WHY NOW

Big data is possible now because other technologies are fueling it:
-Cloud provides affordable access to a massive amount of computing power and to loads of storage: you don’t have to buy a mainframe and a data center, and pay just for what you use.
-Social media allows everyone to create and consume a lot of interesting data.
-Smartphones with GPS offer lots of new insights into what people are doing and where.
-Broadband wireless networks mean people can stay connected almost everywhere and all the time.

HOW

The majority of organizations today are making the transition to a data-driven culture that leverages data and analytics to increase revenue and improve efficiency. This calls for a comprehensive approach, the so-called MORE approach that Avanade recommends:
-Merge: to squeeze the value out of your data, you need to merge data from multiple sources, like structured data from your CRM and unstructured data from social news feeds, to gain a more holistic view. The challenge here is in understanding which data to bring together to provide actionable intelligence.
-Optimize: not all data is good data, and if you start with bad data, a data-driven approach will just have you making bad decisions faster. You should identify, select and capture the optimal data set to make the decisions. This involves framing the right questions and utilizing the right tools and processes.
-Respond: just having data does not mean acting on it. You need the proper reporting tools in place to surface the right information to the people who need it, and those people then need the processes and tools to take action on their insights.
-Empower: data can’t be locked in silos, and you need to train your staff to recognize and act on big data insights.

And what is big data for your company? Why do you use it? And how do you approach a data-driven decision-making model in your organization?

Would be interesting to hear your point.

Helen Boyarchuk
Helen.Boyarchuk@altabel.com
Skype ID: helen_boyarchuk
Business Development Manager (LI page)
Altabel Group – Professional Software Development

There is no doubt that 2012 will be another big year for BI and information management. In this article we’ve tried to gather what we believe are the top BI trends for the near future.

Big Data → Need for Speed

The rise in volume (amount of data), velocity (speed of data) and variety (range of data) gives way to new architectures that no longer only collect and store data but actually use it: on-demand or real-time BI architectures will replace traditional data warehouses. Successful business intelligence projects will need to consider big data as part of their data landscape for the value it delivers. More and more organizations will look toward statistics and data mining to set strategic direction and gain greater insights to stay ahead of the pack. At the same time, BI users expect faster answers from their BI environment, disregarding the fact that the size of the data is increasing.

Shift from analytical BI to operational BI

Increased adoption of cloud and mobile BI encourages individuals to access their KPI (key performance indicator) dashboards more often. An operational dashboard works much like a car’s dashboard. As you drive, you monitor metrics that indicate the current performance of your vehicle and make adjustments accordingly: when the speed limit changes, you check your speedometer and slow down, and when you are out of gas you pull over and fill up. Likewise, an operational dashboard allows you to make tactical decisions based on current performance, whether that is chasing a red-hot lead or ordering an out-of-stock product.

Data democracy

Recent surveys showed that only 25% of employees in businesses that adopted BI had access to the tool. And that is not because the other 75% didn’t want or need information, but because traditional BI tools have been too bulky and technical for them to use.
As organizations increasingly adopt cloud and mobile BI dashboards, this situation is likely to change. Business intelligence is heading towards simpler, more straightforward methods and tools.

Agile

An Agile approach can be used to incrementally remove operational costs and if deployed correctly, can return great benefits to any organization. Agile provides a streamlined framework for building business intelligence/data warehousing (BIDW) applications that regularly delivers faster results using just a quarter of the developer hours of a traditional waterfall approach.

It allows you to start a project after doing the 20 per cent of the requirements and design that delivers 80 per cent of the project’s value. The remaining details are filled in once development is underway and everyone has had a good look at what the challenges actually are.

BI going mobile

In a survey conducted by Gartner, it was found that by 2013 one-third of all BI usage would be on a mobile device, such as a smartphone or tablet. BI users want to access their data anytime and anywhere. This puts demands both on the backend of any BI solution (such as data warehouse appliances) and on the frontend, where information access and visualization must be possible.

BI going up to the Cloud

As cloud computing continues to dominate the IT landscape, BI is following it into the cloud. Over the next few years, adoption of cloud BI tools will be driven by a number of important factors. First, cloud-based solutions offer the advantage of being relatively simple and convenient to deploy. Second, cloud tools are more easily scalable to provide access to key performance indicators (KPIs) to everyone in your organization, no matter where they are or what device they are using. Lastly, continually improving security measures will put to rest any reservations businesses have about storing their sensitive data in the cloud.

We believe the areas enumerated above will grow over the next few years. Organizations will embrace the Agile approach, utilizing new tools and technologies to decrease delivery times and demonstrate substantial business value. As we put more data into the cloud, big data will become standard. Data will be delivered to satisfy the desires of users, so access from mobile devices will overtake desk-based consumption. The businesses that embrace these new business intelligence trends, and take steps to change and adapt the way data is hosted, analyzed, utilized and delivered, will be the ones that grow and prosper in the near future.

And what are your predictions for the big business intelligence trends in the next few years? Do you agree/disagree with our predictions?

Kind regards,
Anna Kozik – Business Development Manager (LI page)
Anna.Kozik@altabel.com
Altabel Group – Professional Software Development

