Altabel Group's Blog


Introducing ASP.NET Core:

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based, internet-connected applications, such as web apps, IoT apps and mobile backends. ASP.NET Core apps can run on .NET Core or on the full .NET Framework. It was architected to provide an optimized development framework for apps that are deployed to the cloud or run on-premises. It consists of modular components with minimal overhead, so you retain flexibility while constructing your solutions. You can develop and run your ASP.NET Core apps cross-platform on Windows, Mac and Linux. ASP.NET Core is open source on GitHub.

The framework is a complete rewrite that unites the previously separate ASP.NET MVC and Web API into a single programming model.

Despite being a new framework, built on a new web stack, it does have a high degree of concept compatibility with ASP.NET MVC.

The ASP.NET platform has existed for more than 15 years. Moreover, at the time System.Web was created, it contained a large amount of code to support backward compatibility with classic ASP. Over the years the platform accumulated a significant amount of code that is deprecated and simply no longer needed. Microsoft faced a difficult choice: abandon backward compatibility, or announce a new platform. They chose the second option, which also meant abandoning the existing runtime. Microsoft has always been a company focused on building for and launching on Windows, and ASP.NET was no exception. Now the situation has changed: Azure and Linux occupy an important place in the company’s strategy.

ASP.NET Core is poised to replace ASP.NET in its current form. So, should you switch to ASP.NET Core now?

ASP.NET Core is not just a new version. It is a completely new platform, a change of epochs. Switching to ASP.NET Core can bring many benefits: more compact code, better performance and scalability. But what price will be paid in return, and how much code will have to be rewritten?

.NET Core drops many components we are used to dealing with. Forget System.Web, Web Forms, TransactionScope, WPF and WinForms: they no longer exist. For simple ASP.NET MVC applications the changes will be minor and the migration will be simple. For more complex applications that use a great number of .NET Framework classes and the ASP.NET pipeline, the situation is more complicated: something may work and something may not, and part of the code will have to be rewritten from scratch. Web API can cause additional problems, because the ASP.NET MVC and Web API subsystems are now combined. Many libraries and NuGet packages are not ready yet, so some applications simply will not have a chance to migrate until new versions of those libraries appear.

I think a situation similar to the transition from Web Forms to ASP.NET MVC awaits us. The ASP.NET Framework will be supported for a long time. At first, only a small number of applications will be developed on ASP.NET Core; their number will increase, and sooner or later everyone will want to move to ASP.NET Core. We still have many applications running on Web Forms, yet it no longer occurs to anyone to develop a new application on Web Forms: everybody chooses MVC. Soon the same will happen with the ASP.NET Framework and ASP.NET Core. ASP.NET Core offers more opportunities to meet modern design standards.

The following characteristics best define .NET Core:

  • Flexible deployment: Can be included in your app or installed side-by-side, user- or machine-wide.
  • Cross-platform: Runs on Windows, macOS and Linux; can be ported to other operating systems. The supported OSes, CPUs and application scenarios will grow over time, provided by Microsoft, other companies, and individuals.
  • Command-line tools: All product scenarios can be exercised at the command line.
  • Compatible: .NET Core is compatible with .NET Framework, Xamarin and Mono, via the .NET Standard Library.
  • Open source: The .NET Core platform is open source, using MIT and Apache 2 licenses. Documentation is licensed under CC-BY. .NET Core is a .NET Foundation project.
  • Supported by Microsoft: .NET Core is supported by Microsoft, per .NET Core Support.

The Bad:

  • As for the cons, one of the biggest issues is gaps in the documentation. Fortunately, most of what you need for creating an API is covered, but when you’re building an MVC app you might run into problems.
  • The next problem is change. Even if you find a solution to your problem, it could have been written for a previous version and might not work in the current one. Thanks to the project’s open-source nature, support is also available on GitHub, but you run into the same problems there (apart from searching).
  • Another thing is the lack of tooling support. You can forget about NCrunch or the ReSharper test runner; both companies say they will get to it once the platform is more stable.
  • ASP.NET Core is still too raw. Many basic things, such as data access, are not fully designed. There is no guarantee that the code you are using now will work in the release version.

The Good:

  • It’s modular. You can add and remove features as you need them by managing NuGet packages.
  • It’s also much easier and straightforward to set up.
  • Web API is now part of MVC, so you can have a UserController class that returns a view but also provides a JSON API.
  • It’s cross-platform.
  • It’s open-source.

ASP.NET Core is also a chance to fix the mistakes of classic ASP.NET MVC, an opportunity to start with a clean slate. In addition, Microsoft aims to become as popular as Ruby and Node.js among younger developers.
Node.js and ASP.NET have always been rivals: both are back-end platforms. But in fact there was, of course, no real struggle between them. The new generation of developers, the so-called hipster developers, prefer Ruby and Node; the older generation, people from the corporate environment, are on the side of .NET and Java. .NET Core is clearly trying to be more youthful, fashionable and popular, so in the future we can expect .NET Core and Node.js to stand in opposition.

In its advertising campaign, Microsoft is betting on positions unusual for it: high performance, scalability and cross-platform support. Do you think ASP.NET is encroaching on Node.js territory? Please feel free to share your thoughts with us.

Thank you in advance!


Darya Bertosh


Business Development Manager | LI Profile

Skype: darya.bertosh

JavaScript is the most accessible cross-platform language nowadays. It is used in both front-end and back-end web development.

Using it, Altabel developers create web apps with offline modes, desktop apps, apps for smartphones and tablets, and add-ins for Microsoft Office, SharePoint and Dynamics. And if you haven’t got acquainted with JavaScript yet, we strongly believe you should do so immediately!

I reckon many of us know there are plenty of languages that compile to JavaScript: CoffeeScript, Dart, GorillaScript and others. To be fair, some of these languages are fly-by-night creations that have never really taken off in the wild, but many are major engineering efforts with large ecosystems and large corporate backers. With so many frameworks and languages out there, it can be difficult to figure out which one is best.

In 2012 Microsoft analyzed the situation and created a new language that addresses these problems while building on existing JavaScript experience. Thus the free, open-source programming language TypeScript was developed, led by Anders Hejlsberg (co-creator of Turbo Pascal, Delphi and C#). From the very beginning the new language expanded rather quickly thanks to its flexibility and productivity, and a considerable number of projects written in JavaScript began to migrate to TypeScript. The popularity and relevance of the new language led to many TypeScript ideas later becoming part of new JavaScript standards. Moving forward, Angular 2.0 (today one of the most popular web frameworks) was written entirely in TypeScript with the help of Microsoft and Google.

But why TypeScript?

Let’s review the main reasons for its popularity:

  • TypeScript is a typed superset of JavaScript. In other words, any valid JavaScript code is also valid TypeScript code.
  • TypeScript may be used to develop JavaScript applications for client-side or server-side execution.

Microsoft’s TypeScript seems to generate the most attractive code and is considered to be one of the best JavaScript front-ends. TypeScript adds sweetness, but at a price.

  • TypeScript can also be used with existing JavaScript frameworks/libraries such as Angular, jQuery, and others and can even catch type issues and provide enhanced code help as you build your apps.
  • TypeScript can be just the right fit for projects in which developers try to remain relevant without the need to learn a whole new syntax.

The ubiquity of JavaScript as a runtime has inspired people from a variety of programming backgrounds to recreate JavaScript as they see fit. And yes, TypeScript lets you write JavaScript the way you really want to.

  • TypeScript differs from JavaScript in its support for optional static typing, for full-blown classes (just as in traditional object-oriented languages) and for modules. It is aimed at raising development speed and simplifying the readability, refactoring and reuse of your code.
  • TypeScript has many additional language features but defining types and creating classes, modules, and interfaces are some of the key features it offers.
  • TypeScript supports the same types you would expect in JavaScript. Types enable TypeScript developers to use highly productive development tools and practices: static checking, symbol-based navigation, statement completion and code refactoring.
  • TypeScript implements many concepts familiar from object-oriented languages, such as inheritance, polymorphism, encapsulation, accessibility modifiers and so on.
  • Many TypeScript features have strict rules, so various code formatting errors are excluded, which reduces the possibility of incorrect implementations or inaccurate method invocations.
  • TypeScript potentially allows you to write large, complex programs more quickly. They are then easier to maintain, develop, scale and test than standard JavaScript.
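The features listed above can be sketched in a few lines. The names below (Person, Greeter) are invented for illustration; they show an interface, a class and static typing working together:

```typescript
// Illustrative sketch: an interface, a class and static types.
interface Person {
  name: string;
  age: number;
}

class Greeter {
  constructor(private greeting: string) {}

  greet(person: Person): string {
    // `person` is statically checked: passing { name: "Ada" } alone
    // would be a compile-time error, because `age` is missing.
    return `${this.greeting}, ${person.name} (${person.age})`;
  }
}

const greeter = new Greeter("Hello");
console.log(greeter.greet({ name: "Ada", age: 36 })); // Hello, Ada (36)
```

Strip out the type annotations and the class sugar, and what remains is plain JavaScript, which is exactly the "typed superset" idea.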


TypeScript has a number of other positive features that are beyond the scope of this article. On the other hand, two significant minuses exist.

  • Probably the biggest minus is the entry threshold and the number of specialists on the market: nowadays there are not many specialists with solid experience in this language.
  • In comparison with JavaScript, development takes more time. This stems from the fact that, apart from the class implementation, one should describe all the interfaces and method signatures involved.

TypeScript 2.0

There are some significant changes coming in TypeScript 2.0 that continue to deliver on the promise of making JavaScript scale. This includes a whole rewrite of the way types are analysed in the flow of the code and an opt-in feature that starts to deal with logic errors around the flexibility of things being undefined or null in JavaScript. Other features planned for TypeScript 2.0 include read-only properties and async/await downlevel support.
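As a rough sketch of two of those 2.0-era features, non-nullable types (opted into via the `--strictNullChecks` compiler flag) and read-only properties, consider the following; the interface and function names are invented for illustration:

```typescript
// Illustrative sketch of non-nullable types and readonly properties.
interface ServerConfig {
  readonly host: string;     // cannot be reassigned after creation
  port: number | undefined;  // explicitly allowed to be undefined
}

function effectivePort(config: ServerConfig): number {
  // Under --strictNullChecks, `config.port` cannot be used as a number
  // until the undefined case is handled; the comparison narrows the
  // type from `number | undefined` down to `number`.
  return config.port === undefined ? 8080 : config.port;
}

const config: ServerConfig = { host: "example.local", port: undefined };
console.log(effectivePort(config)); // 8080
```

The point of the opt-in flag is exactly the class of logic error the article mentions: code that forgets something may be `undefined` or `null` stops compiling instead of failing at runtime.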

TypeScript creator Anders Hejlsberg already has plans for TypeScript 2.1 and beyond. Features envisioned for these releases include a new JavaScript language service in Microsoft’s Visual Studio software development platform and more refactoring support.

The most recent version, TypeScript 1.8, rolled out in February and includes several more features, such as F-bounded polymorphism and string literal types.


So, if you haven’t taken a look at TypeScript, I hope I have convinced you that it is at least worth a bit of your time. It has some of the best minds focused on making JavaScript scale, and the team is going about it in a way that is open and transparent. By embracing the reality of JavaScript and building on top of it, TypeScript is, in my opinion, transforming the common language of the web for the better.

We will be happy to hear how you use TypeScript in your current projects, whether you like it, whether you are planning to switch to this language, what its pros and cons are in your opinion, etc. Feel free to share your thoughts in the comments below!


Victoria Sazonchik


Business Development Manager | LI Profile

Skype: victoria_sazonchik


“Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.”
Donald Knuth, 1974


It’s better to start your journey into a programming career by answering the question “Do you really need programming?” This question does not apply to those who majored in computer programming or something close to it. If you were good at math at school, if you like spending a lot of time in front of the computer, and if you want to learn something new, then programming is for you. What is more, this field is now in demand and highly paid around the world, and programmer vacancies are always open. Isn’t it the best time to be a programmer?:)

Everyone knows that a future programmer should be able to think broadly and view a project from different perspectives before its implementation. Unfortunately, the machine does not understand human language. Of course, I’m not talking about Siri and other voice recognition — I’m talking about the creation of new software. To create a calculator, the computer needs to be given the task the same way a foreman explains to workers how to lay bricks. That’s why you can’t do anything without understanding programming languages. So first you need to decide which programming language to start with.

Here everyone chooses the language that will be useful to them; it depends on the kind of products you are going to develop. Most of us studied Turbo Pascal at school, and it’s no news that this language is hardly used anymore. So if you want to join a team of programmers in the near future, the choice of language should be made sensibly.

Among the most popular programming languages in 2016 are Java, followed by the C languages, then Python, JavaScript, PHP, Ruby, etc. It should come as no surprise that the more popular a language is, the better your chances of finding work in the future. So you’d better start with Java or C#, as these are among the best-paid and relatively simple languages to learn. If you can’t cope with them, try Python: this language suits quick and effective programming.

But if you have no programming experience at all, you can start with something simpler to understand. Good examples are the basics of HTML and CSS.

Why? These two languages are essential for creating static web pages. HTML (Hypertext Markup Language) structures all the text, links, and other content you see on a website. CSS is the language that makes a web page look the way it does—color, layout, and other visuals we call style. Well, if you are interested in making websites, you should definitely start with HTML and CSS.

Let’s move to JavaScript. It is the first full programming language for many people. Why? It is the next logical step after learning HTML and CSS. JavaScript provides the behavior portion of a website. For example, when you see that a form field indicates an error, that’s probably JavaScript at work.
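That form-field example can be sketched in a few lines. The function name and the validation pattern below are illustrative, not from any library; the point is simply that the "behavior" layer is ordinary code deciding what the page should show:

```typescript
// Minimal sketch of the kind of behavior JavaScript adds to a page:
// validating a form field's value and producing an error message.
function validateEmail(value: string): string | null {
  // A deliberately simple pattern: something@something.something
  const emailPattern = /^[^@\s]+@[^@\s]+\.[^@\s]+$/;
  return emailPattern.test(value)
    ? null
    : "Please enter a valid email address.";
}

console.log(validateEmail("ada@example.com")); // null (no error)
console.log(validateEmail("not-an-email"));    // Please enter a valid email address.
```

On a real page, the returned message would be written into the DOM next to the field; that last step is browser plumbing, while the decision itself is plain logic like the above.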

JavaScript has become increasingly popular, and it now lives outside web browsers as well. Learning JavaScript will put you in a good place as it becomes a more general-purpose language.

Some people also suggest choosing Python as a first programming language because, first of all, Python’s code is readable. You don’t even need to be a programmer to understand what is happening in a program. Due to Python’s simple syntax, you will need less time to write programs than in Java, for example. A huge base of libraries will save you a lot of strength, nerves and time. Large technology companies work with Python: Yandex, Google, Facebook and YouTube. It is used for web applications, game development and server software.

Java can also be a good choice for a beginner. This language is more popular than Python, but a bit more complicated; at the same time, its development tools are much better designed. Java is one of the most popular languages for the back-end development of modern enterprise web applications; it is used at Amazon, eBay, LinkedIn and Yahoo! With Java and the frameworks based on it, developers can create scalable web apps for a wide range of users. Java is also the primary language for developing Android applications for smartphones and tablets. Moreover, after Java you will be able to work with lower-level programming languages.

PHP is one more popular language. Along with databases (e.g. MySQL), PHP is an important tool for creating modern web applications. Most sites developed in PHP are focused on large amounts of data, and it is also the fundamental technology behind powerful content management systems like WordPress. On the other hand, PHP has no normal imports, and there are many solutions to one and the same problem, which makes learning it more complicated.


The languages C and C# are a bit complicated for a beginner. But if you develop software for embedded systems, work with system kernels or just want to squeeze every last drop out of the available resources, C is what you need.

Ruby began to gain popularity in 2003, when the Rails framework appeared. Used widely among web startups and big companies alike, Ruby and Rails jobs are pretty easy to come by. Ruby and Rails make it easy to transform an idea into a working application, and they have been used to bring us Twitter, GitHub and Treehouse.

Choosing a programming language may still seem challenging. It shouldn’t. You can’t go wrong. As long as you choose a language that is regularly used in technology today, you’re winning. When you are starting out, the goal is to become solid in the basics, and the basics are pretty similar across almost all modern programming languages.

Part of learning to code is learning a language’s syntax (its grammatical or structural rules). A much bigger part of learning to code, the part that takes longer and gives you more headaches, is learning to solve problems like a programmer. You can learn the grammatical structure of the English language pretty quickly; however, you won’t truly understand the language until you put that grammatical structure to use in a conversation. The same is true in programming. You want to learn the core concepts in order to solve problems. Doing this in one language is similar to doing it in another. Because the core concepts are similar from language to language, I recommend sticking with whichever language you choose until your understanding of the core concepts is solid. If you have a clear idea of your reasons for learning to program, and know exactly what you want to accomplish with your new coding skills, then you’ll be able to make the right choice.

How did you guys get into programming? What are the best programming languages for first-time learners?

Please, share with us your experience and opinion here below:)


Kate Kviatkovskaya


Business Development Manager | LI Profile
Skype: kate.kviatkovskaya

Let’s start with a bit of history. React.js is a JavaScript library for building UIs. It was created by Facebook’s development team to deal with large applications whose data changes over time: React.js hits the “refresh” button any time data changes, and knows to update only the changed parts. React was first used in-house at Facebook, then released as an open-source project, and it quickly gained popularity among developers.

Facebook is not the only one to use React:

Instagram is 100% built on React, both the public site and the internal tools;

Yahoo’s mail client is made with React;

Netflix, the biggest paid video-streaming service, uses React;

Sberbank, the #1 bank in Russia, is built with React;

Khan Academy uses React for most new JS development.

React, in comparison to Angular.js, isn’t a complete framework. However, we can’t say that React.js is only the “V” in MVC. After a closer look, you can see that React.js is more than just the “V”: it has quite a few features of the “C” (controller) part as well. This is why React can be confusing to understand at first.

Let’s see why React.js stands out from the crowd:

Convenient architecture

Flux is highly competitive with MVC. Its one-way data flow provides maintainability and an efficient arrangement of data and DOM elements.

Virtual DOM

React’s developers suggested using a “virtual DOM” to solve the performance issues of websites with a highly dynamic DOM. All changes to a document are made there first, and then React looks for the shortest path to apply them to the real DOM tree. This approach makes the framework fast.
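The core idea can be shown with a toy diff over lightweight element trees. To be clear, this is not React’s actual reconciliation algorithm, and the `VNode` shape and `diff` function below are invented for illustration; the point is that comparing two cheap in-memory trees yields a small list of patches to apply to the expensive real DOM:

```typescript
// Toy illustration of the virtual-DOM idea: compare two lightweight
// element trees and collect only the text changes that must be applied.
interface VNode {
  tag: string;
  text: string;
  children: VNode[];
}

function diff(oldNode: VNode, newNode: VNode, path = "root"): string[] {
  const patches: string[] = [];
  if (oldNode.text !== newNode.text) {
    patches.push(`${path}: set text to "${newNode.text}"`);
  }
  newNode.children.forEach((child, i) => {
    const prev = oldNode.children[i];
    if (prev) {
      patches.push(...diff(prev, child, `${path}/${child.tag}[${i}]`));
    }
  });
  return patches;
}

const before: VNode = {
  tag: "div", text: "",
  children: [{ tag: "span", text: "0 likes", children: [] }],
};
const after: VNode = {
  tag: "div", text: "",
  children: [{ tag: "span", text: "1 like", children: [] }],
};
console.log(diff(before, after)); // [ 'root/span[0]: set text to "1 like"' ]
```

Only the one changed node is touched; everything else in the tree is left alone, which is what makes the approach fast on highly dynamic pages.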


Reusable components

React is fundamentally different from other front-end frameworks in that each asset is made up of many isolated components. Want a button changed across the whole platform? Change it once and voilà, it’s changed everywhere.

By making the creation, distribution and consumption of isolated reusable components more straightforward, developers are better able to save time by using and creating common abstractions. This is true of both low level elements like buttons and high level elements such as accordions.


JSX

React.js uses a special syntax called JSX, which allows you to mix HTML with JavaScript. Markup and code are composed in the same file, which means code completion gives you a hand as you type references to your component’s functions and variables.

SEO friendly

React is significantly more SEO friendly than most JavaScript MVC frameworks. Because it is based on a virtual DOM, you can use it on the server without needing a headless browser such as Phantom.js to render pages for search-engine bots.

React.js is an interesting, emerging JavaScript library. It does have some drawbacks; however, it’s an excellent alternative for building large apps where data changes quickly. We are curious to hear about your experience using React.js. Have you tried it?


Anna Kozik

Anna Kozik

Business Development Manager | LI Profile
Skype: kozik_anna

First there existed e-cash, which then grew into Bitcoin. Since then an entire new world of currency has emerged, known as crypto currency: a virtual, decentralized currency system that exists only as computer files, with transactions recorded in a ledger. It is an Internet-based system of money that goes beyond traditional currency exchange and offers a global system that anyone can use to buy products and services anywhere.

No question, Bitcoin has been capturing the world very fast. Still, although Bitcoin is the pioneer and beyond all doubt the most popular crypto currency, it is not the only one. A number of other crypto currencies are appearing, offering alternatives that are just as valuable and actively traded. Below are some examples:

  • Litecoin:

Litecoin is a peer-to-peer Internet currency that enables instant, near-zero cost payments to anyone in the world. It was released via an open-source client on GitHub on October 7, 2011 by Charles Lee, a former Google employee. Litecoin is an open source, global payment network that is fully decentralized without any central authorities. Mathematics secures the network and empowers individuals to control their own finances. Litecoin features faster transaction confirmation times and improved storage efficiency than the leading math-based currency. With substantial industry support, trade volume and liquidity, Litecoin is a proven medium of commerce complementary to Bitcoin.

  • Dogecoin:

Dogecoin is another currency from the family of crypto currencies. Dogecoin, which has the Shiba Inu (a breed of Japanese dog) as its logo, was created by Billy Markus and Jackson Palmer. It is broadly based on the Bitcoin protocol, but with modifications, and uses scrypt technology as its proof-of-work scheme. It has a block time of 1 minute, and the difficulty retarget time is four hours. There is no limit to how many Dogecoin can be produced, i.e. the supply of coins remains uncapped. Dogecoin deals in large numbers of coins that are individually lower in value, making the currency more accessible, with a low entry barrier, and fit for carrying out smaller transactions.

  • Peercoin:

Peercoin, also referred to as PPCoin, Peer-to-Peer Coin and P2P Coin, was created by software developers Sunny King (a pseudonym) and Scott Nadal. It was launched in August 2012 and was the first digital currency to use a combination of proof-of-stake and proof-of-work. The coins are initially mined through the commonly-used proof-of-work hashing process but as the hashing difficulty increases over time, users are rewarded with coins by the proof-of-stake algorithm, which requires minimal energy for generating blocks. This means that over time, the network of Peercoin will consume less energy. Peercoin is an inflationary currency since there is no fixed upper limit on the number of coins.

  • Primecoin:

Primecoin is an altcoin with a difference. Developed by Sunny King (who also developed Peercoin), its proof-of-work is based on prime numbers, which is different from the usual system of hashcash used by most crypto currencies based on the Bitcoin framework. It involves finding special long chains of prime numbers (known as Cunningham chains and bi-twin chains) and offers greater security and mining ease to the network. These chains of prime numbers are believed to be of great interest in mathematical research.

  • Dash (Previously known as Darkcoin):

Offering more anonymity than other crypto currency, Dash uses a decentralized master code network called Darksend that turns the transactions into nearly untraceable ones. Dash can be mined using a CPU or GPU. Its fan following started building soon after its 2014 launch. It was rebranded from “Darkcoin” to “Dash” on March 25, 2015, a blend of “Digital Cash”.

  • Namecoin:

As another offshoot of Bitcoin, this decentralized open source information and transfer system offers an additional option that is focused on continued innovation in the altcoin industry. It is known for being the first to implement a decentralized DNS, which allows it to operate outside of the regular Internet and governance by Icann, and merged mining.

  • DevCoin:

Billing itself as an ethical crypto currency, this currency was created with the intent of helping to fund any type of open-source project that someone wanted to build. Started in 2011, it is based on Bitcoin, but mining is considered much easier. It is slowly adding merchants that will accept this type of crypto currency.

  • Feathercoin:

Based on Litecoin, this crypto currency offers regular updates with new features, including a mining calculator, QR code generator, button generator and the Feathercoin API, as well as digital wallets for Mac, Linux, Windows and Android. Other features include protection from forking by group mining.

  • Ven:

San Stalnaker created this digital currency for his business club known as Hub Culture. Launched in 2007, the currency is entirely backed by reserve funds to remove the risk of inflation as much as possible. The digital currency is now focused on socially responsible business segments with the intent of creating a currency that supports the environment versus traditional currencies that do not.

  • Novacoin:

The digital currency bills itself as using hybrid proof-of-work and proof-of-stake block generation methods, differentiating itself from other altcoins. The protection scheme integration aims to deter any abuse by mining groups.

  • Megacoin:

This is one of the few digital currencies truly focused on branding itself to become more mainstream for audiences around the world. Based in New Zealand, it takes a very consumer-friendly approach, with the selling point that this is “the people’s currency.”

So there are a number of alternatives to Bitcoin competing for attention. Perhaps not all of them will move forward, but the point is that many people and businesses all over the world are step by step becoming accustomed to the concept of a new type of currency. The idea of crypto currency is no doubt catching on as more people see the potential for how it can be applied to local and global economies.

How do you feel about the concept of crypto currency, and what future do you predict for it? Which crypto currency, in your opinion, deserves attention besides Bitcoin? I’ll be glad to hear your thoughts in the comments below.

Yuliya Tolkach


Business Development Manager | LI Profile
Skype: yuliya_tolkach

If the experts’ estimates regarding IoT are correct, in 5-10 years there will be more than 50 billion interconnected devices in the world, and they all will generate zettabytes of data, which can and should be collected, organized and used for various purposes. Hence the tight correlation between IoT and Big Data is hard to ignore: IoT and Big Data are like Romeo and Juliet, created for each other. The unprecedented amount of data produced by IoT would be useless without the analytic power of Big Data; conversely, without IoT, Big Data would not have the raw material from which to model the solutions expected of it.

What are the impacts of IoT on Big Data?

The IoT revolution means that almost every device or facility will have its own IP address and will be interconnected. These devices are going to generate a huge amount of data, spewing at us from all sides: household appliances, power stations, automobiles, train tracks, shipping containers and so on. That’s why companies will have to update their technologies, instruments and business processes in order to cope with such a great amount of data, benefit from its analysis and, finally, profit from it. The influence of IoT on Big Data is obvious, and it manifests itself in various ways. Let’s take a closer look at the Big Data areas impacted by IoT.

Methods and facilities of Data Storage

IoT produces a great and stable flow of data, which hits companies’ data storage. In response to this issue, many companies are shifting from their own storage framework towards the Platform as a Service (PaaS) model. It’s a cloud-based solution, which supports scalability, flexibility, compliance, and an advanced architecture, creating a possibility to store useful IoT data.

There are a few model options in modern cloud storage: public, private and hybrid. Depending on the nature of the data, companies should be very careful when choosing a particular model. For instance, a private model suits companies that work with extremely sensitive data or with information controlled by government legislation. In other cases, a public or hybrid option will be a perfect fit.

Changes in Big Data technologies

While collecting the relevant data, companies need to filter out excessive information and protect what remains from attack. This presupposes a highly productive mechanism comprising particular software and custom protocols. Message Queue Telemetry Transport (MQTT) and Data Distribution Service (DDS) are two of the most widely used protocols. Both help thousands of sensor-equipped devices connect over real-time machine-to-machine networks. MQTT gathers data from numerous devices and funnels it into the IT infrastructure; DDS, by contrast, distributes data across devices.

After receiving the data, the next step is to process and store it. The majority of companies tend to install Hadoop and Hive for Big Data storage. However, some companies prefer NoSQL document databases such as Apache CouchDB, which can be even more suitable because it provides high throughput and very low latency.
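What makes document databases a natural fit for IoT is that each reading can be stored as a self-describing JSON document, with no fixed schema across device types. A small sketch of such a document (the field names are our own illustration, not a CouchDB requirement; only `_id` is a CouchDB convention):

```python
import json

# An illustrative sensor reading, shaped as it might be stored in a
# document database such as CouchDB.
reading = {
    "_id": "sensor-42:2016-05-01T12:00:00Z",  # CouchDB keys every document by _id
    "device": "sensor-42",
    "type": "temperature",
    "value": 21.7,
    "unit": "C",
    "timestamp": "2016-05-01T12:00:00Z",
}

doc = json.dumps(reading)    # the JSON body that would be sent over HTTP
restored = json.loads(doc)   # documents come back as plain JSON
print(restored["value"])     # 21.7
```

Because each document carries its own structure, a humidity sensor and a GPS tracker can write to the same database without a schema migration.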

Filtering out redundant data

One of the main challenges with the Internet of Things is data management. Not all IoT data is relevant. If you don't identify which data should be transmitted promptly, how long it should be stored and what should be discarded, you could end up with an unwieldy pile of data that still has to be analyzed. As Mobeen Khan, Executive Director of Product Marketing Management at AT&T, puts it: "Some data just needs to be read and thrown away".

A survey carried out by ParStream (an analytics platform for IoT) shows that almost 96% of companies are striving to filter out excessive data from their devices, yet only a few of them manage to do it efficiently. Why is that? The statistics below depict the main problems companies face in the data analysis process; each figure is the percentage of ParStream survey respondents confronting that challenge.

• Data collection difficulties – 36%
• Data is not captured accurately – 25%
• Slow data capture – 19%
• Too much data to analyze properly – 44%
• Data analysis and processing tools are not developed enough – 50%
• Existing business processes are not adaptable enough to allow efficient collection – 24%

To filter out data effectively, organizations will need to upgrade their analysis capabilities and make their IoT data collection process more productive. Data cleaning will become more important to companies than ever.
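One simple way to "read and throw away" redundant readings at the collection stage is a deadband filter: keep a reading only when it differs from the last kept value by more than some threshold. A minimal sketch (the threshold and the sample stream are illustrative):

```python
def deadband_filter(readings, threshold):
    """Keep a reading only when it moves more than `threshold`
    away from the last value that was kept."""
    kept = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            kept.append(value)
            last = value
    return kept

# A temperature stream that barely moves produces little worth storing:
stream = [21.0, 21.1, 21.0, 21.9, 22.0, 25.0, 25.1]
print(deadband_filter(stream, threshold=0.5))  # [21.0, 21.9, 25.0]
```

Even a crude rule like this, applied on the device or at the gateway, can shrink the volume that reaches central storage by an order of magnitude while preserving the signal worth analyzing.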

Data security challenges

IoT has made an impact on the security field, creating challenges that traditional security systems cannot resolve. Protecting the Big Data generated by IoT is complicated because the data comes from a variety of devices producing different types of data and speaking different protocols.

An equally important issue is that many security specialists lack experience in providing data security for IoT. In particular, an attack can not only threaten the data but also harm the connected device itself. Hence the dilemma: a huge amount of sensitive information is produced without the pertinent security to protect it.

Two things can help prevent attacks: a multilayered security system and thorough segmentation of the network. Companies should use software-defined networking (SDN) technologies combined with network identity and access policies to create dynamic network segmentation. SDN-based segmentation should also be used for point-to-point and point-to-multipoint encryption, based on a combination of software-defined networking and public key infrastructure (SDN/PKI). In this way, data security mechanisms can keep pace with the growth of Big Data in IoT.

IoT requires Big Data

As IoT emerges step by step, many questions arise: Where is the data coming from IoT going to be stored? How is it going to be sorted? Where will the analysis be conducted? Obviously, the companies that cope with these issues over the next few years are going to be in a prime position for both profits and influence over the evolution of our connected world. Vehicles will become smarter, able to handle larger amounts of data and probably to carry out limited analytics. However, as IoT grows and companies grow with it, they will have many more challenges to resolve.

What do you think about the evolution of Big Data in IoT? Have you already experienced the challenges of Big Data in IoT? Do you have any ideas about progressive solutions to these challenges? I'll be happy to hear your opinion in the comments below. Please feel free to share your thoughts.


Anastasiya Zakharchuk


Business Development Manager | LI Profile

Skype: azakharchuk1

Over the years, PHP has evolved greatly: it is now not just the most popular server-side scripting language but also a language used to build complex websites and web apps. The same can be said about its frameworks. PHP web frameworks have an ecosystem of their own in the world of web development; they are used to build websites and web applications of all sizes and complexity, ranging from small static websites to large-scale enterprise content management systems.

Still, opinions differ on which PHP framework is best: some developers value performance, some prefer better documentation, some want lots of built-in functions, and so on. Perhaps we should look at the frameworks according to how popular they are.

Different frameworks have been popular at different times. For instance, CodeIgniter remained the top choice for PHP developers from 2011 to mid-2014. Later in 2014, however, a new PHP framework, Laravel, gained popularity and became the most used framework in 2015. Now, in 2016, it is clear that Laravel will remain at the top, thanks to huge interest from developers and clients worldwide.

  1. Laravel

It's already been said that Laravel is the most famous PHP framework nowadays. It is very secure, and a lot of useful libraries (session, authentication, middleware, REST API support and others) are included in it. PHP developers choose to work with Laravel because of its large and steadily growing community and very good functionality. You don't need to write as much code, because the basic, commonly required building blocks are pre-built. At the same time, it's mostly used by experts.


– Routing and middleware are among Laravel's best features

– Laravel uses the Blade template engine for generating views

– Built-in database version control (migrations)

– Built-in unit testing and simple, readable, expressive syntax

– A large community catering to thousands of programmers

  2. CodeIgniter

CodeIgniter is the second most popular web framework among PHP developers. It is a lightweight yet powerful PHP framework that provides a simple and elegant platform for creating full-featured web applications. With CodeIgniter you get all the tools you need in one small package, and it's easy to understand and extend.


– Develop using MVC pattern

– No PHP Version Conflicts

– Less Duplication of Code

– Most Active Online Community

– Cache Class

– Security and Encryption

– Little to no server requirements

  3. Yii

Yii is a high-performance, modern PHP framework. It attracts many PHP developers thanks to features such as fast development, caching, authentication and role-based access control, scaffolding, testing, etc.


– Yii adopts the proven MVC architecture

– Yii allows developers to model database data in terms of objects and avoid the tedium and complexity of writing repetitive SQL statements

– With the help of Yii, collecting input is extremely easy and safe

– Zero configuration is required, which makes your tasks easier

– Thorough maintenance

  4. CakePHP

CakePHP is also popular among PHP developers thanks to its light weight, simplicity, speed and concise code. It is easy to learn, with fast and flexible templating. The built-in CRUD feature is very handy for database interaction. The framework also has various built-in features for security, email, session, cookie and request handling. It's well suited to commercial applications.


– MVC pattern: models support data handling; with a model class you can insert, update, delete or read data from the database

– ORM features, converting data between incompatible type systems in databases and object-oriented programming languages

– Proper class inheritance

– Easily extend with Components, Helpers, Behaviours, and Plug-ins

  5. Symfony

No doubt, Symfony is a stable and sustainable PHP framework. It is flexible and scalable, yet powerful. It has a huge community of Symfony fans committed to taking PHP to the next level. Symfony offers plenty of reusable PHP components, such as security, templating, translation, validation, form configuration and more. It's easy to install and configure on most platforms, and it's database-engine independent.


– Based on the premise of convention over configuration: the developer needs to configure only the unconventional

– Compliant with most web best practices and design patterns

– Enterprise-ready–adaptable to existing information technology

– Stable enough for long-term projects

No doubt, some of our readers will agree, disagree, or have other PHP frameworks they consider the best. But it's already nice that you've read this post and can perhaps contribute to it. So please feel free to add a comment, or shed light on why this or that framework is so popular and whether it should be :)

Aliona Kavalevich

Skype ID: aliona_kavalevich
Senior Business Development Manager (LI page)
Altabel Group – Professional Software Development

